
In order to use the Automator with any degree of success, you need to be able to scrape reliably. That means scraping with as few errors as possible for long periods of time. The short answer is: use private proxies and scrape from Bing. If you want the long answer, go check out my last article, Scrapebox Scraping Tutorial – Easy 56 Million Links / Day.

We won't go over each feature in detail in this article, since there's already a video made by loopline from Scrapebox explaining everything about the Automator. Also, while the Automator can work with most of the tasks in Scrapebox, in this article we will be focusing exclusively on the harvesting part.

There's not much setup needed, but there are a couple of things to keep in mind while using it. The most important thing you should do is create a folder for your Automator. I personally put it in the root of C on my VPS and recommend that you do the same.

As for proxies, I've already talked about this, but the tl;dr version is: public proxies suck. They won't live long enough for you to complete a single scraping run. And if Scrapebox gets stuck on the first run without working proxies, the rest of your scrape naturally won't work.

So you can create a pretty simple job for scraping that does the following:

This is just a basic example, but you get the idea. Right now the job is handy, but it will only do a single scraping run and then stop. Let's create a more automated version of the previous job by adding a loop at the end:
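Since the looping job is really just a test-proxies, harvest, repeat cycle, here's a minimal Python sketch of the same control flow. To be clear, this is not how you drive Scrapebox itself (the Automator is configured entirely through its GUI); the file names and the Bing liveness check are my own assumptions, there only to make the loop-plus-proxy-check idea concrete.

```python
"""A rough sketch of the looping Automator job's logic, assuming a
proxies.txt file with one ip:port per line and a keywords.txt file.
Function names are hypothetical stand-ins for Automator actions."""

import time
import requests


def load_proxies(path):
    # One "ip:port" proxy per line; blank lines are ignored.
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]


def test_proxy(proxy, timeout=5):
    # Treat a proxy as alive if it can fetch a known page quickly.
    try:
        r = requests.get(
            "https://www.bing.com",
            proxies={"http": f"http://{proxy}", "https": f"http://{proxy}"},
            timeout=timeout,
        )
        return r.ok
    except requests.RequestException:
        return False


def harvest(keywords, proxies):
    # Placeholder for the actual harvesting step: in Scrapebox this is
    # the Automator's harvest action, not Python code.
    print(f"Harvesting {len(keywords)} keywords through {len(proxies)} proxies")


while True:
    live = [p for p in load_proxies("proxies.txt") if test_proxy(p)]
    if live:  # without working proxies the run would just get stuck
        harvest(open("keywords.txt").read().split(), live)
    time.sleep(300)  # pause between runs (arbitrary), then loop back to step one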
