So, off to see the new Avengers tonight, super excited about it (probably due to the large amount of Pick & Mix and energy drinks I have already consumed!)
Before I go, I thought I would rebuild the Burf.co server; the site has been running off my desktop computer for a week or so in preparation for the new build, and in the meantime my MacBook Pro has been filling up with CommonCrawl data! The reason I took the old server offline in the first place was that MongoDB ran on it like a dog! Even with RAID 0, full-text search was slower than my 69-year-old mum! (She is really slow, bless her.)
So, the rebuild: I have scrapped the RAID 0 and put in an SSD. I am also running two instances of MongoDB on the same box. The server now has 128GB of RAM, so it should be fine, but this time I want two distinct datasets without the power cost of running two servers (yes, I know I could run stuff in the cloud, but look up the cost of 4TB of space).
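For anyone curious how two MongoDB instances share one box: each `mongod` just needs its own port and data directory. This is only a sketch of the idea; the ports, paths, and helper function are my placeholders, not the actual Burf.co setup.

```python
import subprocess

def mongod_command(port, dbpath):
    """Build the launch command for one mongod instance."""
    return ["mongod", "--port", str(port), "--dbpath", dbpath]

# Hypothetical layout: one instance per dataset, both on the same machine.
RAW = mongod_command(27017, "/mnt/4tb/mongo-raw")        # raw CommonCrawl pages
SEARCH = mongod_command(27018, "/mnt/ssd/mongo-search")  # refined search data

if __name__ == "__main__":
    for cmd in (RAW, SEARCH):
        subprocess.Popen(cmd)  # each instance is isolated by port + data dir
```

Clients then pick a dataset simply by connecting to the matching port.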
One dataset will live on the 4TB drive and will be the raw CommonCrawl data before I have processed it. The other dataset, which will live on the SSD, will be the processed data for the search engine. The aim is to have a much smaller, refined set of keywords for each page that can live in memory and, in hard times, be read off the SSD. This approach also means I can reprocess the data as many times as I like, and even switch out the full-text engine (the second Mongo instance) for Postgres without losing the raw web pages held in the main MongoDB.
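In rough code, the raw-to-refined step might look like the sketch below. The keyword heuristic, collection names, and ports are all invented for illustration; the real pipeline would be more involved.

```python
import re
from collections import Counter

# A tiny stopword list for illustration; a real one would be much larger.
STOPWORDS = {"the", "and", "for", "that", "with", "this", "are", "was"}

def extract_keywords(text, top_n=20):
    """Reduce a raw page to a small keyword set that can live in memory."""
    words = re.findall(r"[a-z]{3,}", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

# With the two mongod instances, the flow would be roughly:
#   raw     = MongoClient("mongodb://localhost:27017").crawl.pages     # 4TB drive
#   refined = MongoClient("mongodb://localhost:27018").search.keywords # SSD
#   for page in raw.find():
#       refined.insert_one({"url": page["url"],
#                           "keywords": extract_keywords(page["html"])})
```

Because the raw pages are never touched, the refined collection can be dropped and rebuilt (or replaced with Postgres full-text search) at any time.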
My original plan was to index between 1 and 5 million pages, which was more than the original Burf.com ever did. The current setup is already at 7.7 million without breaking a sweat, and with the new one I hope to hit 50 million!
I did plan to crawl the web manually before I discovered CommonCrawl (and I may still crawl parts myself), so I bought a second-hand HP c7000 blade server (it's a f@cking beast, and I can't even lift it!). However, I think it is going to be repurposed for some machine learning work across the larger dataset. I can't let 16 * 4 * 2 cores go to waste, even though it keeps my house warm!
So, next steps for Burf.co:
- Move all the data from the other machines onto the new server and fire up the current Burf.co
- Get 4TB of CommonCrawl web data and process it
- Build a new search algorithm
- Make the site sexy!
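For the CommonCrawl step, each crawl publishes a gzipped list of its WARC file paths, which you can turn into download URLs. A minimal sketch, assuming the `data.commoncrawl.org` mirror and the published `warc.paths.gz` layout; the crawl ID here is just an example.

```python
import gzip
import urllib.request

CC_BASE = "https://data.commoncrawl.org/"

def warc_url(path):
    """Turn a relative path from warc.paths.gz into a full download URL."""
    return CC_BASE + path.strip()

def list_warc_paths(crawl_id):
    """Fetch the list of WARC file paths for one crawl, e.g. 'CC-MAIN-2018-17'."""
    url = f"{CC_BASE}crawl-data/{crawl_id}/warc.paths.gz"
    with urllib.request.urlopen(url) as resp:
        return gzip.decompress(resp.read()).decode().splitlines()

# Each returned path can then be fetched with warc_url(path) and the
# records dumped straight into the raw MongoDB instance for later processing.
```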