Threat Intelligence protects against known harmful bots and scrapers, and is included in all our services.
Bots are a necessary and integral aspect of running virtually any service or hosting material on the internet. Not all bots represent a threat to legitimate users and their data; many perform valuable functions such as indexing and monitoring. However, bots, often in conjunction with scrapers, are also frequently used for nefarious purposes such as competitive intelligence gathering, data mash-ups, establishing fraudulent websites, probing system vulnerabilities, analyzing financial information, location tracking, and the theft of valuable or sensitive information. Malicious bots and scrapers typically swamp websites with thousands of requests per second, reducing their speed, capacity, or even availability for legitimate users.
The most effective way of combating unauthorized bot behaviour is to reduce its efficiency: slowing bots down and restricting the amount of content they are able to scrape. It is, however, not always desirable simply to block bots, some of which perform tasks that support commercial or other organizational functions. The Threat Intelligence Module identifies and classifies bots, allowing them access within specified thresholds before they are blocked or directed to a “waiting room”, leaving legitimate access unaffected. We combat bot and scraper threats in nine steps (1-9) across four phases, all in a matter of seconds: first a series of static measures in our Threat Protection Centers (TPCs), then dynamic measures outside our TPCs, then dynamic measures within our TPCs, and finally a reCAPTCHA challenge. This is performed quickly to minimize disruption for legitimate users.
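The threshold-based handling described above can be illustrated with a minimal sketch. This is a hypothetical example, not the actual Threat Intelligence implementation: the class name `BotGate`, the classifications, and the specific limits are all assumptions chosen for illustration. A known-good bot receives a larger request budget per time window; an unknown client that exceeds its threshold is sent to a waiting room, and one that far exceeds it is given a challenge.

```python
from collections import defaultdict

# Illustrative request budgets per time window (hypothetical values).
THRESHOLDS = {"verified-bot": 100, "unknown": 10}
WINDOW_SECONDS = 1.0

class BotGate:
    """Sketch of threshold-based bot handling: allow, queue, or challenge."""

    def __init__(self):
        self.counts = defaultdict(int)        # requests seen per client
        self.window_start = defaultdict(float)  # start of each client's window

    def handle(self, client_id: str, classification: str, now: float) -> str:
        # Reset the client's counter once its time window has elapsed.
        if now - self.window_start[client_id] >= WINDOW_SECONDS:
            self.window_start[client_id] = now
            self.counts[client_id] = 0
        self.counts[client_id] += 1
        limit = THRESHOLDS.get(classification, THRESHOLDS["unknown"])
        if self.counts[client_id] <= limit:
            return "allow"          # within threshold: serve normally
        if self.counts[client_id] <= 2 * limit:
            return "waiting-room"   # over threshold: queue the client
        return "challenge"          # far over threshold: reCAPTCHA-style check
```

For an unknown client with a limit of 10, a burst of 25 requests in one window would yield 10 allowed requests, 10 redirections to the waiting room, and 5 challenges, while a verified bot with the same traffic pattern would remain within its larger budget.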
Threat Intelligence is included in our platform services and protects all Internet-facing assets.