Scan with a Crawler

Nexploit can crawl your web application to define the attack surface automatically. This option requires minimal configuration: to run a security scan using a crawler, you only need to specify the target URL in the Targets field.

You can scan either the whole application or parts of it. To scan only specific parts of your application, use the control at the right side of the Targets section to add multiple URLs. In this case, only the specified sections of the application, and everything downstream from them, will be scanned.
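The steps above can be sketched as a scan request body. This is a minimal illustration only: the field names `name`, `discoveryTypes`, and `crawlerUrls`, and the example URLs, are assumptions and not taken from this page.

```python
import json

def build_crawler_scan(name, crawler_urls):
    """Build a hypothetical scan request body that lets the crawler
    define the attack surface, optionally limited to specific parts
    of the application via multiple entry-point URLs."""
    return {
        "name": name,
        "discoveryTypes": ["crawler"],      # assumed: crawler defines the scan scope
        "crawlerUrls": list(crawler_urls),  # one or more entry points to crawl
    }

# Scan only two sections of the application (placeholder URLs):
payload = build_crawler_scan(
    "Full app scan",
    ["https://app.example.com", "https://app.example.com/admin"],
)
print(json.dumps(payload, indent=2))
```

Passing a single URL scans the whole application; each extra URL narrows the crawl to that section and everything downstream from it.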



To ensure complete scan coverage, configure an authentication object so that the crawler can reach the authenticated parts of the target application. See Managing Your Authentications for detailed information.
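Conceptually, attaching an authentication object extends the same scan configuration. The `authObjectId` field name and its value below are assumptions used for illustration; the actual object is created in the UI as described in Managing Your Authentications.

```python
# Hypothetical scan configuration (field names are assumptions).
scan_payload = {
    "name": "Authenticated crawl",
    "discoveryTypes": ["crawler"],
    "crawlerUrls": ["https://app.example.com"],
}

# Attach a previously configured authentication object so the crawler
# can reach pages behind the application's login.
scan_payload["authObjectId"] = "my-auth-object-id"  # placeholder ID
```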

Pros:
- Simple usage. You only need to specify the target host; the crawler defines the attack surface (scan scope) automatically.
- Full automation. By default, the crawler automatically covers every part of the application it can reach.

Cons:
- Less coverage. The crawler cannot get through some user-specific forms or provide the required input, so the attack surface may be defined incompletely, and Nexploit will not cover those parts of the application during scanning.
- Limited coverage level. The crawler applies only to the web pages of the target application and the microservices connected to it directly; crawling between microservices is disabled.



You can combine full automation with complete coverage by applying both the Crawler and Recorded (HAR) discovery types for a scan.
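Combining the two discovery types can be sketched the same way. The `"har"` discovery-type value and the `fileId` field are assumed names for illustration; the HAR file would be recorded and uploaded beforehand.

```python
# Hypothetical combined configuration: the crawler explores the
# application automatically, while the recorded HAR file supplies the
# requests the crawler cannot reach on its own (field names assumed).
combined_payload = {
    "name": "Crawler + HAR scan",
    "discoveryTypes": ["crawler", "har"],        # both discovery types at once
    "crawlerUrls": ["https://app.example.com"],  # crawler entry point
    "fileId": "uploaded-har-file-id",            # placeholder: uploaded HAR recording
}
```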
