Federated search differs from "centralized search" in that no central server downloads and indexes all the documents. Instead, search requests are distributed to all the participating servers and the results are collated by the originating server (a sketch of this fan-out follows the list below). This puts servers in control:
* each server controls what can be found
* each server controls how often it can be searched
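To make the fan-out concrete, here is a minimal sketch in Python. Everything in it is an assumption: no such protocol has been agreed upon yet, so the /search endpoint, the JSON result shape, and the peer URLs are all made up for illustration.

{{{
# A minimal federated-search sketch. All names here are assumptions:
# it presumes each peer exposes a /search?q=... endpoint returning a
# JSON list of {"title": ..., "url": ..., "score": ...} objects.
import json
import urllib.parse
import urllib.request
from concurrent.futures import ThreadPoolExecutor

PEERS = [
    "https://wiki.example.org",  # hypothetical participating servers
    "https://notes.example.net",
]

def query_peer(peer, terms, timeout=5):
    """Send the search request to a single participating server."""
    url = peer + "/search?q=" + urllib.parse.quote(terms)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return json.load(response)
    except (OSError, ValueError):
        return []  # an unreachable or broken peer contributes nothing

def federated_search(terms):
    """Fan the query out to all peers in parallel and collate the results."""
    with ThreadPoolExecutor(max_workers=len(PEERS)) as pool:
        per_peer = pool.map(lambda peer: query_peer(peer, terms), PEERS)
    merged = [result for results in per_peer for result in results]
    # Naive collation: trust whatever relevance score the peers report.
    return sorted(merged, key=lambda r: r.get("score", 0), reverse=True)

print(federated_search("federated search"))
}}}

The collation step is deliberately naive; merging results from servers that score relevance differently is one of the hard problems a real standard would have to address.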
For the search engine, this makes /spiders/ unnecessary: you no longer need to crawl the web, making all those redundant requests just to see what's out there.
The search engine also no longer requires the space to store a copy of every document it has indexed, or some compressed variant of it.
All this [[ToxicData|data is toxic]]: it makes the search engine backend a valuable target, since it not only contains the keys to the kingdom, it contains a copy of the kingdom!
The document servers no longer need a [[RobotsDotTxt|robots.txt]] file to tell complying spiders which documents to index and which to exclude, how much delay to use between requests, and so on.
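For comparison, this is roughly what such a file looks like today. The paths and the bot name are made up, and Crawl-delay is a widely recognized extension rather than part of the original standard:

{{{
# robots.txt -- example only; paths and bot names are hypothetical
User-agent: *
Disallow: /private/
Crawl-delay: 10

User-agent: BadBot
Disallow: /
}}}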
Compliance with the [[Wikipedia:Robot exclusion standard|robot exclusion standard]] is optional, unfortunately: many spiders have bugs, or wilfully ignore the instructions. Servers might therefore also require scripts to watch their load, password-protect sections, block user agents, or block IP numbers (a sketch of a user-agent block follows the list below). So much work! This system truly leads to a /lot/ of wasted time.
* [https://alexschroeder.ch/wiki/2020-12-25_Defending_against_crawlers Defending against crawlers], where [[Alex Schroeder]] argues (unsuccessfully) that [[Gemini]] can do without
* [https://alexschroeder.ch/wiki/2020-12-22_Apache_config_file_to_block_user_agents Apache config file to block user agents], one of the many defences against spiders used by Alex
* [https://alexschroeder.ch/wiki/2019-06-26_Privacy_vs._fail2ban Privacy vs. fail2ban], where Alex realizes that he cannot remove IP numbers from the log files if he wants to use fail2ban to throttle misbehaving spiders
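As an illustration of the user-agent defence linked above, a block like the following, using Apache's mod_rewrite, returns 403 Forbidden to matching crawlers. The agent names are examples only, not Alex's actual list:

{{{
# Sketch of blocking crawlers by user agent with Apache's mod_rewrite.
# The agent names are placeholders; see Alex's config above for a real list.
RewriteEngine On
RewriteCond "%{HTTP_USER_AGENT}" "MJ12bot|SemrushBot|AhrefsBot" [NC]
RewriteRule "." "-" [F,L]
}}}

It is an arms race, of course: anyone can lie about their user agent, which is why IP-based tools like fail2ban come next.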
In a world of federated search, we would still have to defend ourselves, sadly. Having federated search doesn't mean that centralized search (and all the SEO and marketing bots) would disappear. :(
[new:TimurIsmagilov:2021-05-08 18:41 UTC]
This idea is nice. Many systems already provide their own search; why not just connect them into a federation?
Also, not sure if this is the right place for this, but here are some interesting search engines:
* https://wiby.me searches the classic web
* https://lieu.cblgh.org searches webrings (self-hosted)
Alex, shall we start some federated wiki search standard? I'm planning to implement some search capabilities for myco, so I might as well standardise them. Let's call it ProjectHaustoria. Create the page if you are interested.