迪巴拉 posted on 2024-10-15 19:29:38

[Free Share] How to Block AI Spiders and Prevent Your Site's Articles from Being Scraped

I'll start with the cheapest, most brute-force options. No padding; let's get straight into it.

Method 1: Host your domain's DNS on Cloudflare and block AI crawlers with one click

If you can't reach Cloudflare, you'll need to sort out a VPN yourself.
(For domestic domains this barely affects access speed. Some people assume a domestic DNS provider is faster, but in practice the speeds are about the same.)
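To check that the switch is actually working, send a request with an AI crawler's User-Agent and see whether it gets rejected. A minimal Python sketch, assuming your site is https://example.com (a placeholder) and that Cloudflare's rule matches on the UA:

import urllib.error
import urllib.request

# Pretend to be an AI crawler that the Cloudflare toggle should block.
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "GPTBot/1.0"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print("status:", resp.status)  # the bot UA got through
except urllib.error.HTTPError as e:
    print("blocked, status:", e.code)  # e.g. 403 when the block is active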

Method 2: Block AI crawlers with the Baota (BT) panel firewall (I'm on a cracked copy of Baota; I don't know whether the free version supports this setting). Add the following keywords to the firewall's User-Agent blacklist:
Amazonbot, ClaudeBot, PetalBot, gptbot, Ahrefs, Semrush, Imagesift, Teoma, ia_archiver, twiceler, MSNBot, Scrubby, Robozilla, Gigabot, yahoo-mmcrawler, yahoo-blogs/v3.9, psbot, Scrapy, SemrushBot, AhrefsBot, Applebot, AspiegelBot, DotBot, DataForSeoBot, java, MJ12bot, python, seo, Censys
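If you want to sanity-check the keyword list before pasting it into the panel, the sketch below builds one combined pattern and tests a couple of UAs against it. It assumes the firewall treats each keyword as a case-insensitive substring match, which is how I read the UA blacklist:

import re

# The same keywords as the list above.
KEYWORDS = [
    "Amazonbot", "ClaudeBot", "PetalBot", "gptbot", "Ahrefs", "Semrush",
    "Imagesift", "Teoma", "ia_archiver", "twiceler", "MSNBot", "Scrubby",
    "Robozilla", "Gigabot", "yahoo-mmcrawler", "yahoo-blogs/v3.9", "psbot",
    "Scrapy", "SemrushBot", "AhrefsBot", "Applebot", "AspiegelBot",
    "DotBot", "DataForSeoBot", "java", "MJ12bot", "python", "seo", "Censys",
]
# One case-insensitive alternation, escaping characters like "/" and ".".
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

for ua in ("Mozilla/5.0 (compatible; ClaudeBot/1.0)",
           "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"):
    print(ua, "->", "blocked" if PATTERN.search(ua) else "allowed")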


Method 3: Copy the code below, save it as robots.txt, and upload it to your site's root directory. (Keep in mind robots.txt is only a request: well-behaved crawlers honor it, rogue ones ignore it, which is why the firewall methods still matter.)
User-agent: Ahrefs
Disallow: /

User-agent: Semrush
Disallow: /

User-agent: Imagesift
Disallow: /

User-agent: Amazonbot
Disallow: /

User-agent: gptbot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PetalBot
Disallow: /

User-agent: Baiduspider
Disallow:

User-agent: Sosospider
Disallow:

User-agent: sogou spider
Disallow:

User-agent: YodaoBot
Disallow:

User-agent: Googlebot
Disallow:

User-agent: Bingbot
Disallow:

User-agent: Slurp
Disallow:

User-agent: Teoma
Disallow: /

User-agent: ia_archiver
Disallow: /

User-agent: twiceler
Disallow: /

User-agent: MSNBot
Disallow: /

User-agent: Scrubby
Disallow: /

User-agent: Robozilla
Disallow: /

User-agent: Gigabot
Disallow: /

User-agent: googlebot-image
Disallow:

User-agent: googlebot-mobile
Disallow:

User-agent: yahoo-mmcrawler
Disallow: /

User-agent: yahoo-blogs/v3.9
Disallow: /

User-agent: psbot
Disallow:

User-agent: dotbot
Disallow: /
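Before uploading, you can confirm the file parses the way you expect with Python's standard-library robots.txt parser. Run this in the directory where you saved robots.txt (example.com is a placeholder):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
with open("robots.txt", encoding="utf-8") as f:
    rp.parse(f.read().splitlines())

# Fully disallowed crawler: prints False.
print(rp.can_fetch("GPTBot", "https://example.com/post/1"))
# An empty Disallow line means "allow everything": prints True.
print(rp.can_fetch("Baiduspider", "https://example.com/post/1"))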

Method 4: Prevent the site from being scraped (save the following code in the Baota nginx configuration file)
# Block scraping by Scrapy and similar tools
if ($http_user_agent ~* (Scrapy|Curl|HttpClient|crawl|curb|git|Wtrace)) {
    return 403;
}

# Block the listed UAs, as well as requests with an empty UA
if ($http_user_agent ~* "CheckMarkNetwork|Synapse|Nimbostratus-Bot|Dark|scraper|LMAO|Hakai|Gemini|Wappalyzer|masscan|crawler4j|Mappy|Center|eright|aiohttp|MauiBot|Crawler|researchscan|Dispatch|AlphaBot|Census|ips-agent|NetcraftSurveyAgent|ToutiaoSpider|EasyHttp|Iframely|sysscan|fasthttp|muhstik|DeuSu|mstshash|HTTP_Request|ExtLinksBot|package|SafeDNSBot|CPython|SiteExplorer|SSH|MegaIndex|BUbiNG|CCBot|NetTrack|Digincore|aiHitBot|SurdotlyBot|null|SemrushBot|Test|Copied|ltx71|Nmap|DotBot|AdsBot|InetURL|Pcore-HTTP|PocketParser|Wotbox|newspaper|DnyzBot|redback|PiplBot|SMTBot|WinHTTP|Auto Spider 1.0|GrabNet|TurnitinBot|Go-Ahead-Got-It|Download Demon|Go!Zilla|GetWeb!|GetRight|libwww-perl|Cliqzbot|MailChimp|SMTBot|Dataprovider|XoviBot|linkdexbot|SeznamBot|Qwantify|spbot|evc-batch|zgrab|Go-http-client|FeedDemon|Jullo|Feedly|YandexBot|oBot|FlightDeckReports|Linguee Bot|JikeSpider|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|jaunty|Python-urllib|lightDeckReports Bot|YYSpider|DigExt|HttpClient|MJ12bot|EasouSpider|LinkpadBot|Ezooms|^$") {
    return 403;
}

# Block request methods other than GET, HEAD and POST
if ($request_method !~ ^(GET|HEAD|POST)$) {
    return 403;
}
After adding this, save and restart nginx. From then on, these spiders and scanning tools will get a 403 Forbidden when they hit the site.
Note: if your site publishes posts through the 火车头 (LocoySpider) collector, the code above will make publishing fail with a 403. If you want to keep publishing with 火车头, use the following code instead:

# Block scraping by Scrapy and similar tools
if ($http_user_agent ~* (Scrapy|Curl|HttpClient|crawl|curb|git|Wtrace)) {
    return 403;
}

# Block the listed UAs (unlike the version above, empty-UA requests are allowed through)
if ($http_user_agent ~* "CheckMarkNetwork|Synapse|Nimbostratus-Bot|Dark|scraper|LMAO|Hakai|Gemini|Wappalyzer|masscan|crawler4j|Mappy|Center|eright|aiohttp|MauiBot|Crawler|researchscan|Dispatch|AlphaBot|Census|ips-agent|NetcraftSurveyAgent|ToutiaoSpider|EasyHttp|Iframely|sysscan|fasthttp|muhstik|DeuSu|mstshash|HTTP_Request|ExtLinksBot|package|SafeDNSBot|CPython|SiteExplorer|SSH|MegaIndex|BUbiNG|CCBot|NetTrack|Digincore|aiHitBot|SurdotlyBot|null|SemrushBot|Test|Copied|ltx71|Nmap|DotBot|AdsBot|InetURL|Pcore-HTTP|PocketParser|Wotbox|newspaper|DnyzBot|redback|PiplBot|SMTBot|WinHTTP|Auto Spider 1.0|GrabNet|TurnitinBot|Go-Ahead-Got-It|Download Demon|Go!Zilla|GetWeb!|GetRight|libwww-perl|Cliqzbot|MailChimp|SMTBot|Dataprovider|XoviBot|linkdexbot|SeznamBot|Qwantify|spbot|evc-batch|zgrab|Go-http-client|FeedDemon|Jullo|Feedly|YandexBot|oBot|FlightDeckReports|Linguee Bot|JikeSpider|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|jaunty|Python-urllib|lightDeckReports Bot|YYSpider|DigExt|HttpClient|MJ12bot|EasouSpider|LinkpadBot|Ezooms") {
    return 403;
}

# Block request methods other than GET, HEAD and POST
if ($request_method !~ ^(GET|HEAD|POST)$) {
    return 403;
}

Once everything is set, run a simulated crawl to make sure no good spiders were hit by mistake (a rough way to do this is sketched after the UA table below).

Note: the names blocked above do not include the following six common spiders:

Baidu: Baiduspider
Google: Googlebot
Bing: bingbot
Sogou: Sogou web spider
360: 360Spider
Shenma: YisouSpider

Common crawler User-Agents and what they are used for:
FeedDemon             - content scraping
BOT/0.1 (BOT for JCE) - SQL injection
CrawlDaddy            - SQL injection
Java                  - content scraping
Jullo                 - content scraping
Feedly                - content scraping
UniversalFeedParser   - content scraping
ApacheBench           - CC attack tool
Swiftbot              - useless crawler
YandexBot             - useless crawler
AhrefsBot             - useless crawler
jikeSpider            - useless crawler
MJ12bot               - useless crawler
ZmEu                  - phpMyAdmin vulnerability scanning
WinHttp               - scraping / CC attacks
EasouSpider           - useless crawler
HttpClient            - TCP attacks
Microsoft URL Control - scanning
YYSpider              - useless crawler
jaunty                - WordPress brute-force scanner
oBot                  - useless crawler
Python-urllib         - content scraping
Indy Library          - scanning
FlightDeckReports Bot - useless crawler
Linguee Bot           - useless crawler
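For the simulated-crawl check mentioned above, something like the following will do. It is a rough Python sketch (https://example.com stands in for your own site) that sends a few User-Agents and prints the status each one gets back: the blocked UAs should return 403, and the major spiders should not.

import urllib.error
import urllib.request

SITE = "https://example.com/"  # placeholder: use your own domain

TESTS = {
    "Scrapy/2.11.0": 403,                              # tool rule should block this
    "Mozilla/5.0 (compatible; MJ12bot/v1.4.8)": 403,   # UA blacklist should block this
    "Mozilla/5.0 (compatible; Baiduspider/2.0)": 200,  # major spider, must get through
    "Mozilla/5.0 (compatible; Googlebot/2.1)": 200,    # major spider, must get through
}

for ua, expected in TESTS.items():
    req = urllib.request.Request(SITE, headers={"User-Agent": ua})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            status = resp.status
    except urllib.error.HTTPError as e:
        status = e.code
    print(f"{ua!r}: got {status}, expected {expected}")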

TyCoding posted on 2024-10-15 19:30:16

Solid stuff.

IT618发布 posted on 2024-10-15 19:31:15

Packed with useful info.

Crystαl posted on 2024-10-15 19:32:10

One more tip on Cloudflare: if your domain has an ICP filing (备案), don't turn on the little orange cloud (the CDN proxy) for the apex domain. Your IP will resolve to an overseas address, and the risk is that a spot check from the authorities gets the filing revoked.

Subdomains are fine with the CDN enabled; I've run it that way for several years and haven't gotten a call yet.

TyCoding posted on 2024-10-15 19:32:55

There's also the kind that spoofs Baidu's UA to scrape pages. How do you block those?

浅生 posted on 2024-10-15 19:33:39

Let's see what the other experts say; I don't know a good method for that yet.

独家记忆 posted on 2024-10-15 19:34:30

Nice.

婷姐 posted on 2024-10-15 19:34:45

Packed with useful info; learned something today.

TyCoding posted on 2024-10-15 19:35:04

Good stuff, good stuff; supporting the OP.

Crystαl posted on 2024-10-15 19:35:47

Full of good stuff.