Expanding the Investigation
In Part One, the focus was on quickly identifying attacker tooling and vulnerable endpoints using a lightweight Python script. That approach proved effective for early triage, but it told only part of the story.
In this second phase, the goal shifted from what was attacked to what the attacker achieved.
Specifically, I wanted to answer questions that mirror deeper incident response work:
- Was any data actually stolen?
- Did exploitation lead to system access?
- How did the attacker move from the web application to the underlying server?
Rather than rewriting the script from scratch, I continued building on the same philosophy: incremental analysis supported by automation, not replaced by it.
Investigative Focus for Part Two
Instead of expanding the script to capture everything, I narrowed the scope to three key outcomes:
- Credential compromise
- Data exfiltration
- Host-level access
This meant correlating web logs, FTP logs, and authentication logs to reconstruct the attacker’s progression.
The investigation focused on:
- Response codes indicating success vs. failure
- Abnormal query strings suggesting exploitation
- Repeated access patterns across different services
- Evidence of lateral movement beyond the web layer
Web Application Abuse and Data Exposure
The attacker’s movement through the application showed clear intent and escalation.
After scraping user email addresses from the product reviews section, the attacker successfully brute-forced the login endpoint. This was confirmed by a transition from repeated authentication failures to a successful login at:
11/Apr/2021:09:16:31 +0000
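That failure-to-success transition is straightforward to detect programmatically. Below is a minimal sketch assuming a combined-format web access log; the endpoint path and the threshold of five failures are illustrative choices, not values taken from the challenge logs:

```python
import re

# Matches the fields we need from a combined-log-format line:
# client IP, timestamp, request line, and HTTP status code.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" (?P<status>\d{3})'
)

def find_bruteforce_success(lines, endpoint="/rest/user/login", threshold=5):
    """Report a success that follows a run of failures against one endpoint."""
    failures = 0
    for line in lines:
        m = LOG_PATTERN.match(line)
        if not m or m["path"] != endpoint:
            continue
        if m["status"].startswith(("4", "5")):
            failures += 1
        elif m["status"].startswith("2") and failures >= threshold:
            # Timestamp of the first success and the number of prior failures.
            return m["ts"], failures
        else:
            failures = 0
    return None
```

The key idea is stateful scanning: a lone 200 response is noise, but a 200 immediately after a run of 401s against the same endpoint is a strong brute-force signal.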
With authenticated access established, the attacker then targeted the /rest/products/search endpoint using automated SQL injection techniques.
The query strings observed were long, encoded, and characteristic of sqlmap, strongly indicating automated exploitation rather than manual testing.
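Those same traits (length, encoding density, injection keywords) can be turned into a simple scoring heuristic. The thresholds and keyword list below are illustrative assumptions, not tuned detection rules:

```python
from urllib.parse import unquote

# Common fragments seen in automated SQL injection payloads.
SQLI_KEYWORDS = ("union", "select", "sleep(", "benchmark(", "information_schema", "--")

def looks_like_sqli(query_string, min_len=100):
    """Flag query strings that resemble automated SQL injection probes."""
    decoded = unquote(query_string).lower()
    long_enough = len(query_string) >= min_len
    heavily_encoded = query_string.count("%") >= 10
    keyword_hits = sum(kw in decoded for kw in SQLI_KEYWORDS)
    # Any two signals together are treated as suspicious.
    return sum([long_enough, heavily_encoded, keyword_hits >= 2]) >= 2
```

Requiring two independent signals keeps a single long but benign search query from firing the rule on its own.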
Through this vulnerability, the attacker was able to retrieve:
email, password
This marked the transition from reconnaissance to confirmed data compromise.
Data Exfiltration via Misconfigured Services
Following the SQL injection, the attacker shifted away from the web application and toward backend services.
FTP logs revealed that the attacker accessed the server using anonymous FTP, a critical misconfiguration. Through this service, they attempted to download sensitive backup files, including:
- coupons_2013.md.bak
- www-data.bak
The use of anonymous FTP meant no credentials were required, significantly lowering the barrier to data exfiltration once the service was discovered.
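Hunting for this pattern can also be automated. The sketch below assumes vsftpd-style log lines ("OK LOGIN" / "OK DOWNLOAD" records); the exact format varies by FTP server, so the regexes are assumptions to adapt:

```python
import re

# vsftpd-style records; adjust to your server's actual log format.
LOGIN = re.compile(r'OK LOGIN: Client "(?P<client>[^"]+)", anon password')
DOWNLOAD = re.compile(r'OK DOWNLOAD: Client "(?P<client>[^"]+)", "(?P<path>[^"]+)"')

def anonymous_ftp_activity(lines):
    """Return files downloaded by clients that logged in anonymously."""
    anon_clients = set()
    downloads = []
    for line in lines:
        if (m := LOGIN.search(line)):
            anon_clients.add(m["client"])
        elif (m := DOWNLOAD.search(line)) and m["client"] in anon_clients:
            downloads.append((m["client"], m["path"]))
    return downloads
```

Tracking which clients authenticated anonymously, then attributing downloads to them, mirrors the manual correlation performed during the investigation.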
Host-Level Access and Shell Compromise
The final stage of the attack occurred at the system level.
Authentication logs showed extensive SSH brute-force activity against the www-data account. After numerous failures, the attacker successfully authenticated and obtained shell access using:
ssh, www-data
At this point, the attacker had moved fully beyond the application and into the operating system, completing a classic web-to-server compromise.
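The same stateful approach used on the web logs applies to authentication logs. A sketch assuming standard sshd syslog messages ("Failed password for ..." / "Accepted password for ..."), with an illustrative failure threshold:

```python
import re
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?(?P<user>\S+) from (?P<ip>\S+)")
ACCEPTED = re.compile(r"Accepted password for (?P<user>\S+) from (?P<ip>\S+)")

def ssh_bruteforce_logins(lines, threshold=10):
    """Flag accepted logins preceded by many failures from the same source."""
    failures = Counter()
    hits = []
    for line in lines:
        if (m := FAILED.search(line)):
            failures[(m["user"], m["ip"])] += 1
        elif (m := ACCEPTED.search(line)):
            key = (m["user"], m["ip"])
            if failures[key] >= threshold:
                hits.append((m["user"], m["ip"], failures[key]))
    return hits
```

Keying the failure counter on the (user, source IP) pair avoids flagging an unrelated user's legitimate login that happens to follow someone else's failed attempts.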
Why This Matters from a Blue-Team Perspective
This challenge illustrates how small weaknesses compound:
- Publicly accessible APIs exposing user data
- Weak authentication controls
- SQL injection vulnerabilities
- Anonymous FTP access
- SSH exposed to brute-force attempts
None of these issues alone guarantees compromise, but together they form a clear attack path.
From a detection standpoint, the most important lesson is correlation. No single log file told the full story. Only by combining web, FTP, and authentication logs was it possible to reconstruct the attacker’s full kill chain.
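One concrete way to perform that correlation is to normalize each log's timestamps and merge everything into a single chronological timeline. The sketch below assumes events have already been parsed into (timestamp, source, description) tuples; the two format strings match the example timestamps shown earlier but would need extending for real logs:

```python
from datetime import datetime, timezone

# Each log uses its own timestamp format, so normalize before sorting.
FORMATS = {
    "web": "%d/%b/%Y:%H:%M:%S %z",  # e.g. 11/Apr/2021:09:16:31 +0000
    "ftp": "%a %b %d %H:%M:%S %Y",  # e.g. Sun Apr 11 09:30:01 2021
}

def normalize(ts, source):
    """Parse a raw timestamp and reduce it to naive UTC for comparison."""
    dt = datetime.strptime(ts, FORMATS[source])
    if dt.tzinfo is not None:
        dt = dt.astimezone(timezone.utc).replace(tzinfo=None)
    return dt

def build_timeline(events):
    """Merge events from different logs into one chronological view."""
    return sorted((normalize(ts, source), source, note) for ts, source, note in events)
```

Once the events sit on one timeline, the attacker's progression from web brute force to FTP exfiltration to SSH access becomes visible at a glance, which is exactly the correlation no single log file could provide.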
Reflections on Automation
Just like in Part One, automation played a supporting role rather than a deciding one.
The Python script helped surface indicators quickly, but analyst judgment was still required to:
- Interpret intent
- Confirm success vs. noise
- Understand attacker progression
This balance mirrors real-world SOC work, where tools accelerate investigation but do not replace analytical reasoning.
Closing Thoughts
Part Two reinforced that effective detection isn't about building the most complex tooling; it's about asking the right questions and using automation to answer them efficiently.
This phase of the challenge demonstrated how a web application attack can escalate into full system compromise when defensive gaps align, and why layered monitoring is critical in real environments.
This concludes the second part of the Juicy Details series, where the focus shifted from identifying attacker activity to understanding impact and outcomes.
