Can one of you help DuckDuckGo build a better spider/engine, either through distributed computing or through a new "cPanel"-style server software?

As the owner of a small business who relies on organic and paid Google traffic to make a living, I have grown to hate Google: their "search bubble," their "algorithm changes" to produce "better results," and their constant insistence that I properly manage "free services" like Google "Places" or "Maps" or whatever the shit they are calling it this week. All of it forces me to spend more and more on paid listings while my competitors rack up fake listings in Places that never get taken down, no matter how many times I report them.

I went looking for a better search engine to promote and came across duckduckgo.com. This site gives awesome, relevant results, does not track you, and has the great feature that there is no "search bubble," meaning the results you see when you search are exactly what I see when I search. The only problem with this search engine (aside from the name, which, although cute, is impossible for people to remember, thereby limiting its popularity) is that they lack the resources to crawl the web on their own and rely on harvested results from other search sources.

In communicating with the DuckDuckGo team I learned that crawling is expensive for companies, as it is resource intensive, requiring endless server racks and high-speed connections that are not cheap to rent and way too expensive for small companies to purchase. So someone needs to come up with a way for the DuckDuckGo team to crawl the entire web cheaply. I have an idea in this regard that I have no ability to initiate on my own (I lack the know-how to cross-post to other subreddits, never mind code my ideas).
Idea 1: Use a distributed computing service, much like the SETI@home project, that crawls the web using the computing power of a worldwide net of volunteer machines to give the DuckDuckGo team the power they need to be a serious competitor to Google.

Idea 1A: Do the same as above and complement it with a mobile app that harnesses all the tablets and phones in the world. If this app became widespread, the entire web could be crawled in no time at all.

Idea 2: The way I understand it, most search spiders work by visiting a link on a web page, then visiting all the outbound links on that page, and on and on. This has an obvious shortcoming in that there is content out there that is not linked to and does not link to anything else. The argument could be made that if it does not link to anything and nothing links to it, then the content is probably worthless, but why take that chance? If our endeavor is to remake search into the best search ever, let's do the job right. There is software called cPanel which is used to manage a rather large chunk of the Linux-based servers in the world. An add-on for cPanel could be coded to crawl all of the content on these servers. If cPanel won't play along, then a cPanel substitute could be coded.

Idea 1B: Using the earlier-discussed distributed model, it would be easier to crawl every possible IP address in the world for all content on that IP. Every website and device has an IP, right? I know IPv6 is coming, and this will make the job harder, but the Cubs will also eventually win the World Series, and if we wait long enough Jesus may make a comeback too... meaning IPv6 is a great idea, I just don't see it replacing IPv4 anytime soon.

I am just some taxi driver who sees that search could be made tons better, if only someone with the ability to make it better would do it. DuckDuckGo is on the right track as far as privacy and all that; they just lack the resources to do the job right. Why don't one of you help them out?
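For anyone wondering what the link-following spider in Idea 2 actually does: it is basically a breadth-first search over pages. Here is a minimal sketch of that loop using a made-up in-memory "web" (the page names are hypothetical, not real sites), which also shows the shortcoming described above: a page that nothing links to is never found.

```python
from collections import deque

# Toy "web": page -> list of outbound links (all names hypothetical).
WEB = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
    "orphan.com": [],  # nothing links here, so a link-following crawl never finds it
}

def crawl(seed):
    """Breadth-first crawl: visit a page, queue its outbound links, repeat."""
    seen = {seed}
    frontier = deque([seed])
    while frontier:
        page = frontier.popleft()
        for link in WEB.get(page, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return seen

print(sorted(crawl("a.com")))  # orphan.com is never reached
```

A real spider would fetch each page over HTTP and extract the links from the HTML, but the frontier logic is the same; notice that "orphan.com" exists in the toy web yet never shows up in the result, which is exactly why Idea 2 proposes crawling from the server side instead of only following links.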
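The SETI@home-style model in Idea 1 boils down to a coordinator handing out batches of URLs ("work units") to volunteer machines and collecting what they crawl. A rough sketch of that loop, simulated in one process with threads standing in for volunteers (all names and URLs are hypothetical; a real system would do this over the network):

```python
import queue
import threading

work_units = queue.Queue()  # coordinator's outgoing batches of URLs
results = queue.Queue()     # crawled URLs reported back by volunteers

# Coordinator loads URL batches to hand out.
for batch in (["a.com", "b.com"], ["c.com"], ["d.com", "e.com"]):
    work_units.put(batch)

def volunteer():
    """One volunteer machine: take a batch, 'crawl' it, report back."""
    while True:
        try:
            batch = work_units.get_nowait()
        except queue.Empty:
            return  # no work left
        for url in batch:
            # A real client would fetch and parse the page here;
            # we just record the URL to show the flow of work units.
            results.put(url)
        work_units.task_done()

threads = [threading.Thread(target=volunteer) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.queue))
```

The point of the batching is that volunteers come and go: the coordinator only has to track which work units were handed out and which came back, and can reissue a batch if a volunteer disappears mid-crawl (not shown in this sketch).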