【Title】: Exception in thread "main" java.lang.NoClassDefFoundError: org/jsoup/Jsoup
【Posted】: 2020-09-28 17:14:48
【Question】:

I copied a simple web crawler from the internet and then tried to run the application from a test class. Every time I run it, I get the error "Exception in thread "main" java.lang.NoClassDefFoundError: org/jsoup/Jsoup". I had already imported the jsoup jar into the project as an external JAR, since I need it for the HTTP handling.

Error message:

Exception in thread "main" java.lang.NoClassDefFoundError: org/jsoup/Jsoup
    at com.copiedcrawler.SpiderLeg.crawl(SpiderLeg.java:35)
    at com.copiedcrawler.Spider.search(Spider.java:40)
    at com.copiedcrawler.SpiderTest.main(SpiderTest.java:9)
Caused by: java.lang.ClassNotFoundException: org.jsoup.Jsoup
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:602)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
    ... 3 more
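A `NoClassDefFoundError` caused by a `ClassNotFoundException` means the code compiled against jsoup, but the jar is missing from the *runtime* classpath. One quick way to check this, independent of the crawler code, is to try loading the class reflectively (a minimal sketch; `ClasspathProbe` is a hypothetical helper, not part of the question's code):

```java
public class ClasspathProbe {

    // Returns true when the named class can be loaded at runtime,
    // i.e. its jar is actually on the runtime classpath.
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.lang.String always loads; org.jsoup.Jsoup only loads
        // when the jsoup jar is on the runtime classpath.
        System.out.println("java.lang.String: " + isOnClasspath("java.lang.String"));
        System.out.println("org.jsoup.Jsoup:  " + isOnClasspath("org.jsoup.Jsoup"));
    }
}
```

If this prints `false` for `org.jsoup.Jsoup` even though the project compiles in Eclipse, the jar is on the compile-time build path but not on the classpath of the launch configuration actually being run.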

Spider class

package com.copiedcrawler;


import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;

public class Spider
{
  private static final int MAX_PAGES_TO_SEARCH = 10;
  private Set<String> pagesVisited = new HashSet<String>();
  private List<String> pagesToVisit = new LinkedList<String>();

  public void search(String url, String searchWord)
  {
      while(this.pagesVisited.size() < MAX_PAGES_TO_SEARCH)
      {
          String currentUrl;
          SpiderLeg leg = new SpiderLeg();
          if(this.pagesToVisit.isEmpty())
          {
              currentUrl = url;
              this.pagesVisited.add(url);
          }
          else
          {
              currentUrl = this.nextUrl();
          }
          leg.crawl(currentUrl); // Lots of stuff happening here. Look at the crawl method in
                                 // SpiderLeg
          boolean success = leg.searchForWord(searchWord);
          if(success)
          {
              System.out.println(String.format("**Success** Word %s found at %s", searchWord, currentUrl));
              break;
          }
          this.pagesToVisit.addAll(leg.getLinks());
      }
      System.out.println("\n**Done** Visited " + this.pagesVisited.size() + " web page(s)");
  }


  /**
   * Returns the next URL to visit (in the order that they were found). We also do a check to make
   * sure this method doesn't return a URL that has already been visited.
   * 
   * @return
   */
  private String nextUrl()
  {
      String nextUrl;
      do
      {
          nextUrl = this.pagesToVisit.remove(0);
      } while(this.pagesVisited.contains(nextUrl));
      this.pagesVisited.add(nextUrl);
      return nextUrl;
  }
}

SpiderLeg class

package com.copiedcrawler;


import java.io.IOException;
import java.util.LinkedList;
import java.util.List;

import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class SpiderLeg
{
    // We'll use a fake USER_AGENT so the web server thinks the robot is a normal web browser.
    private static final String USER_AGENT =
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.112 Safari/535.1";
    private List<String> links = new LinkedList<String>();
    private Document htmlDocument;


    /**
     * This performs all the work. It makes an HTTP request, checks the response, and then gathers
     * up all the links on the page. Perform a searchForWord after the successful crawl
     * 
     * @param url
     *            - The URL to visit
     * @return whether or not the crawl was successful
     */
    public boolean crawl(String url)
    {
        try
        {
            Connection connection = Jsoup.connect(url).userAgent(USER_AGENT);
            Document htmlDocument = connection.get();
            this.htmlDocument = htmlDocument;
            if(connection.response().statusCode() == 200) // 200 is the HTTP OK status code
                                                          // indicating that everything is great.
            {
                System.out.println("\n**Visiting** Received web page at " + url);
            }
            if(!connection.response().contentType().contains("text/html"))
            {
                System.out.println("**Failure** Retrieved something other than HTML");
                return false;
            }
            Elements linksOnPage = htmlDocument.select("a[href]");
            System.out.println("Found (" + linksOnPage.size() + ") links");
            for(Element link : linksOnPage)
            {
                this.links.add(link.absUrl("href"));
            }
            return true;
        }
        catch(IOException ioe)
        {
            // We were not successful in our HTTP request
            return false;
        }
    }


    /**
     * Performs a search on the body of on the HTML document that is retrieved. This method should
     * only be called after a successful crawl.
     * 
     * @param searchWord
     *            - The word or string to look for
     * @return whether or not the word was found
     */
    public boolean searchForWord(String searchWord)
    {
        // Defensive coding. This method should only be used after a successful crawl.
        if(this.htmlDocument == null)
        {
            System.out.println("ERROR! Call crawl() before performing analysis on the document");
            return false;
        }
        System.out.println("Searching for the word " + searchWord + "...");
        String bodyText = this.htmlDocument.body().text();
        return bodyText.toLowerCase().contains(searchWord.toLowerCase());
    }


    public List<String> getLinks()
    {
        return this.links;
    }

}

SpiderTest class

package com.copiedcrawler;

public class SpiderTest {

    public static void main(String[] args) {
        // TODO Auto-generated method stub
        Spider s1 = new Spider();
        s1.search("https://www.w3schools.com/html/", "html");
    }

}

【Comments】:

  • Works fine for me. Which Eclipse version and which Java version are you using?
  • Here is a long Stack Overflow answer about the exception you mentioned... and here is a short but direct answer about importing JARs with Eclipse.
  • If you have a module-info.java file, delete it. Otherwise, please show how you added the jsoup JAR to the Java build path.
  • I'm using the Eclipse June 2020 release and JDK 14.0.2
  • At first I added Jsoup via the project properties, by clicking on the build path and adding the external jar. Later I tried it with a libs folder, but that didn't work either

Tags: java eclipse jsoup noclassdeffounderror


【Solution 1】:

Based on the stack trace, you are running the Java program from the command line, but you forgot to add jsoup to the classpath. Try running

java -cp classes:libs/jsoup.jar com.copiedcrawler.SpiderTest

where classes is the directory your program was compiled into and libs is the folder containing the libraries.

【Discussion】:

  • I added the jar via the project's properties --> Java build path --> add external JARs --> the jar file in my Downloads folder. Now I've tried it with a lib folder, but that doesn't work either. Maybe it's because I'm using the wrong Java version? Currently I'm on the latest Eclipse release and JDK 14. I honestly don't know whether I need a different jar, or a specific Java version for this jar.
  • In that case I strongly recommend using Maven for dependency management. Then you don't need to set up all the libraries by hand or depend on the exact IDE.
  • Since you "copied it from the internet", you should also pass that advice along to the author of the software you copied.
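The Maven suggestion above amounts to declaring jsoup in pom.xml instead of wiring jars by hand; Maven then downloads the jar and puts it on both the compile and runtime classpath. A sketch of the dependency entry (the version shown is just one that was current around the question's date):

```xml
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.13.1</version>
</dependency>
```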