BHW user mikeybobikey asked me to rerun my original captcha testing with the Captcha Sniper v4 beta in place of CS 3. I was happy to oblige. If you'd like to read the original round of testing, here it is: http://www.blackhatworld.com/blackhat-seo/black-hat-seo/547611-captcha-breaker-vs-captcha-sniper-texas-death-cage-match.html

Below are the results of ~24 hours of testing with each solver. The testing was done on my VPS with Berman Hosting. I am a happy Berman customer, and if you're looking for a VPS they deserve consideration. Each solver was run in conjunction with GSA's Search Engine Ranker software. For this project I ran a campaign on 4 sites with 3 tiers apiece - same settings as in my previous round of testing.

Tale of the tape:

Captcha Sniper v4 beta alone for 24 hrs: SER reports 27285 link submissions, 436 verified. CS says it "attempted" 12861 and solved 9293 (72%), with an average time of ~0.065.

Captcha Breaker alone for 24 hrs: SER reports 25302 link submissions, 397 verified. CB says it "recognized" 8467 of 11320, with an average time of 0.221.

Note that overall numbers were down for both solvers compared to the first round of testing. I suspect the keyword batch is going a bit stale. CS bettered CB by about 10% in both submissions and verified links.

CS went first. Testing time was very close to 23 hours 45 minutes for both solvers. The testing time was shortened just a bit because Sven pushed out a pair of updates to SER during each solver's testing period. Each pair of updates came at about the same point in the run.

CSv4b reports a dramatically shorter average solve time than CS3. Again, this should be taken with a grain of salt - the code is not open to inspection. But if you sit in front of CSv4b and watch it for a minute or five, it's not hard to see that it is faster than CS3.

There are a large number of captcha types. I'm sure that on some types CS is better than CB, and vice versa.
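If you want to check my math on the "about 10%" claim, here's a quick sketch using only the figures reported above (the dict keys and function names are just my own labels, nothing from either tool):

```python
# Figures as reported by SER and each solver's own counters.
cs = {"submitted": 27285, "verified": 436, "solved": 9293, "attempted": 12861}
cb = {"submitted": 25302, "verified": 397, "solved": 8467, "attempted": 11320}

def solve_rate(s):
    """Percent of attempted captchas the solver claims it solved."""
    return 100.0 * s["solved"] / s["attempted"]

def edge(a, b, key):
    """Percent by which solver a beats solver b on a given metric."""
    return 100.0 * (a[key] - b[key]) / b[key]

print(f"CS solve rate: {solve_rate(cs):.1f}%")                      # ~72.3%
print(f"CB solve rate: {solve_rate(cb):.1f}%")                      # ~74.8%
print(f"CS edge in submissions: {edge(cs, cb, 'submitted'):.1f}%")  # ~7.8%
print(f"CS edge in verified:    {edge(cs, cb, 'verified'):.1f}%")   # ~9.8%
```

Worth noting: per attempt, CB's self-reported recognize rate actually works out a shade higher, even though CS came out ahead on total submissions and verified links. Each tool is grading its own homework here, so don't read too much into the per-attempt numbers.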
Test results are in some measure dependent on the types of captchas encountered on the platforms and targets. I'd bet that on my first tier my projects don't share much in common with the average SER user's. On the lower tiers my stuff is going to look a lot more like yours. Your mileage may vary.

Another thing I noticed: the CPU % numbers that SER reported when CSv4b was active were much lower than with CS3, and definitely lower than with CB. I don't know why this would be. Maybe Sven could address it. I don't really know what CPU % represents - the total load on the CPU, or just SER's own? But the % is low enough that I'd feel comfortable upping my SER thread count with CSv4b. For both this testing project and the previous one I used 30 threads. I may up that to 40 with CSv4b. That should increase performance, but the relationship between thread count and link submissions almost certainly ain't linear, so caveat emptor.

I'm currently running a test of SER using both CSv4b and CB together. Previous testing indicated that either solver working alone is more productive than using both. Maybe this time it will be different.

In any case, it's clear that CSv4b is a big improvement over CS3. It's nicer to look at, it's faster, and it offers more control. It does *not* have a self-updating facility like either SER or CB. Mikey has said that CSv4 will have a menu option to check for an update, but not to actually download and install it for you. You can see the menu option in CSv4b. I didn't test it.