Security Republic
Securing the world one entity at a time.
torfetch
I found a tool called "TorCrawl" to help index ransomware posts. However, indexing is all the tool does; it does not let me download the data leaks as evidence. So I used AI to help me write a Python script I named "torfetch", which takes the index produced by "TorCrawl" and automates downloading of data leaks from the ransomware blog.
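The script itself is Python, but the core idea fits in a few lines of shell. A minimal sketch, assuming Tor's SOCKS proxy is listening on its default port 9050 and that the index is a plain text file (index.txt, hypothetical name) with one URL per line:

#!/bin/bash
# Sketch only: fetch every URL listed in index.txt through the local Tor SOCKS proxy.
# Assumes Tor is listening on 127.0.0.1:9050 and index.txt holds one URL per line.
while read -r url; do
    # --socks5-hostname makes Tor resolve .onion names; -O saves under the remote filename
    curl --socks5-hostname 127.0.0.1:9050 -fsS -O "$url" || echo "Failed: $url"
done < index.txt

The --socks5-hostname flag matters here: it hands name resolution to Tor instead of the local resolver, which is required for .onion addresses.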
VM for ransomware investigations
My laundry list of tools/software useful when investigating ransomware cases (a quick install sketch follows below).
Tor: Obviously needed to access .onion sites.
qBittorrent: Download leaked data from torrent files.
PeaZip: Extract archives containing leaked data.
unrar: Extract archives containing leaked data.
LibreOffice: Read leaked documents.
SSH: Transfer files from the VM to the host.
Any suggestions on what other tools you use?
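For a quick start, most of the list is apt-installable in one shot. A sketch assuming an Ubuntu-based VM with the universe/multiverse repositories enabled; PeaZip and the Tor Browser usually come from elsewhere:

# Sketch for an Ubuntu-based VM; package availability varies by release and enabled repos.
sudo apt update
sudo apt install -y qbittorrent unrar libreoffice openssh-server torbrowser-launcher
# PeaZip is not in the stock repos; grab the .deb from peazip.github.io or use the flatpak.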
dnssecaudit.py
Since I was on a roll with Copilot, I decided to automate DNSSEC auditing with the following Python script. Not the most creative tool name.
import subprocess
import sys
import datetime

def check_prerequisites():
    """Ensure dnspython is available, installing it if necessary."""
    try:
        import dns.resolver  # noqa: F401 -- presence check only
    except ImportError:
        print("The required module 'dnspython' is not installed. Installing it now...")
        subprocess.check_call([sys.executable, "-m", "pip", "install", "dnspython"])
        print("Installation complete. Please restart the script.")
        sys.exit()

def check_dnssec(domain):
    """Return True if the domain publishes a DNSKEY record (a rough DNSSEC indicator)."""
    # Imported here rather than at module level so check_prerequisites() can run first.
    import dns.resolver
    import dns.exception
    resolver = dns.resolver.Resolver()
    resolver.nameservers = ['1.1.1.1']  # use a reliable public DNS server
    try:
        answers = resolver.resolve(domain, 'DNSKEY')
        return bool(answers)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN,
            dns.resolver.NoNameservers, dns.exception.Timeout):
        return False

def main():
    check_prerequisites()
    domains = []
    choice = input("Enter '1' to input a single domain or '2' to input a list of domains from a file: ").strip()
    if choice == '1':
        domain = input("Enter a domain name: ").strip()
        domains.append(domain)
    elif choice == '2':
        file_path = input("Enter the file path containing the list of domains: ").strip()
        try:
            with open(file_path, 'r') as file:
                domains = [line.strip() for line in file if line.strip()]  # skip blank lines
        except FileNotFoundError:
            print(f"File not found: {file_path}")
            return
    else:
        print("Invalid choice. Please restart the script and enter '1' or '2'.")
        return
    results = []
    for domain in domains:
        if check_dnssec(domain):
            results.append(f"{domain}: DNSSEC enabled")
        else:
            results.append(f"{domain}: DNSSEC not enabled")
    date_str = datetime.datetime.now().strftime("%Y-%m-%d")
    report_filename = f"dnssecaudit-report-{date_str}.txt"
    try:
        with open(report_filename, 'w') as report_file:
            for result in results:
                print(result)  # still print to console
                report_file.write(result + '\n')
        print(f"Report saved to {report_filename}")
    except OSError as e:  # catch potential errors during file writing
        print(f"An error occurred while writing the report: {e}")

if __name__ == "__main__":
    main()
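To sanity-check what the script reports for a given domain, you can query the same resolver for the DNSKEY record directly with dig:

# Manual cross-check against the same resolver: a DNSKEY answer (and the AD flag
# with +dnssec, since 1.1.1.1 validates) suggests the zone is signed.
dig @1.1.1.1 example.com DNSKEY +dnssec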
Modded script to keep Ubuntu packages and snaps updated.
Used Copilot to update my Ubuntu maintenance script. I tweaked it slightly, though, since I am running LTS and don't want every package bumped to the latest version.
#!/bin/bash

# Function to handle errors
function handle_error {
    echo "$1 Exiting."
    exit 1
}

# Function to update apt packages
function update_apt {
    echo "Updating apt package lists..."
    sudo apt update || handle_error "Error updating apt package lists."
    echo "Upgrading apt packages..."
    sudo apt upgrade -y || handle_error "Error upgrading apt packages."
    echo "Cleaning up apt packages..."
    sudo apt autoremove -y && sudo apt clean || handle_error "Error cleaning up apt packages."
}

# Function to update snap packages
function refresh_snaps {
    echo "Updating Snap packages..."
    sudo snap refresh
    if [[ $? -ne 0 ]]; then
        echo "Refresh failed. Attempting to kill running Snap processes..."
        sudo pkill -f snap
        sudo snap refresh || handle_error "Error updating Snap packages after killing processes."
    else
        echo "Snap packages updated successfully."
    fi
}

# Function to update Maldet database and run a scan
function run_maldet {
    echo "Updating Maldet database..."
    sudo maldet -u || handle_error "Error updating Maldet database."
    echo "Starting Maldet scan of /home (recent changes, quiet mode)..."
    sudo maldet -r -q /home || handle_error "Error running Maldet scan."
    SCAN_LOG=$(sudo maldet --report list | tail -n 1 | awk '{print $NF}')
    if [[ -n "$SCAN_LOG" ]]; then
        echo "Maldet scan log located at: $SCAN_LOG"
    else
        echo "Could not retrieve Maldet scan log location."
    fi
}

# Main script execution
update_apt
refresh_snaps
run_maldet
echo "All done!"
Autobots.py
Had an epiphany to try writing a working Python script using "Copilot". I call the following script "Autobots"; it audits sites for the presence of "robots.txt" (and also checks for the common "robot.txt" misspelling).
import requests
import os
from urllib.parse import urlparse, urljoin

def fetch_and_save_file(url, file_type, domain):
    """Fetches and saves a file, prepending the domain and returning success status."""
    try:
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        parsed_url = urlparse(url)
        path = parsed_url.path.lstrip("/")  # remove leading slash to avoid issues
        path_parts = path.split("/")
        if path_parts and path_parts[-1] in ("robots.txt", "robot.txt"):
            path_parts.pop()  # drop the last part if it is robots.txt or robot.txt
        sanitized_path = "_".join(path_parts)
        filename = f"{parsed_url.netloc}_{sanitized_path}_{file_type}.txt"
        filename = "".join(c for c in filename if c.isalnum() or c in "._-")  # improved sanitization
        output_dir = "output"
        os.makedirs(output_dir, exist_ok=True)
        filepath = os.path.join(output_dir, filename)
        with open(filepath, 'w', encoding='utf-8') as file:
            file.write(f"# Domain: {domain}\n")
            file.write(response.text)
        print(f"Downloaded {file_type} from {url} to {filepath}")
        return True
    except requests.exceptions.RequestException as e:
        print(f"Error downloading {file_type} from {url}: {e}")
        return False
    except OSError as e:
        print(f"Error saving file: {e}")
        return False

def check_robots_txt(url):
    """Checks for robots.txt and robot.txt, returning success counts."""
    try:
        parsed_url = urlparse(url)
        if not parsed_url.scheme:
            url = "http://" + url
            parsed_url = urlparse(url)
        domain = parsed_url.netloc
        robots_success = 0
        robot_success = 0
        if domain:
            robots_url = urljoin(url, "robots.txt")
            if fetch_and_save_file(robots_url, "robots.txt", domain):
                robots_success = 1
            robot_url = urljoin(url, "robot.txt")
            if fetch_and_save_file(robot_url, "robot.txt", domain):
                robot_success = 1
        else:
            print(f"Invalid URL format: {url}")
        return robots_success, robot_success
    except Exception as e:
        print(f"An unexpected error occurred in check_robots_txt: {e}")
        return 0, 0

def process_urls(urls):
    """Processes a list of URLs and summarizes results."""
    total_robots = 0
    successful_robots = 0
    total_robot = 0
    successful_robot = 0
    for url in urls:
        print(f"Checking URL: {url}")
        robots_s, robot_s = check_robots_txt(url)
        total_robots += 1
        successful_robots += robots_s
        total_robot += 1
        successful_robot += robot_s
        print("-" * 20)
    print("\n--- Summary ---")
    if total_robots > 0:
        print(f"Checked {total_robots} robots.txt files. Successfully downloaded: {successful_robots} ({(successful_robots / total_robots) * 100:.2f}%)")
    else:
        print("No robots.txt files checked.")
    if total_robot > 0:
        print(f"Checked {total_robot} robot.txt files. Successfully downloaded: {successful_robot} ({(successful_robot / total_robot) * 100:.2f}%)")
    else:
        print("No robot.txt files checked.")

if __name__ == "__main__":
    while True:
        try:
            choice = input("Enter '1' to input URL manually or '2' to read from a file (or 'q' to quit): ")
            if choice == '1':
                url = input("Enter the URL: ")
                process_urls([url])
            elif choice == '2':
                file_path = input("Enter the path to the text file: ")
                try:
                    with open(file_path, 'r', encoding="utf-8") as file:
                        urls = [line.strip() for line in file if line.strip()]
                    process_urls(urls)
                except FileNotFoundError:
                    print(f"File not found: {file_path}")
            elif choice.lower() == 'q':
                break
            else:
                print("Invalid choice. Please enter '1', '2', or 'q'.")
        except Exception as e:
            print(f"An unexpected error occurred: {e}")
        print("\n")
VMware Workstation Pro is now free for personal use!!!
Brute force
It's been a while since I've performed a brute-force attack. In this demo, I use "Hydra" from Kali to attack my test virtual machine (VM) running an FTP server.
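For reference, the core Hydra invocation against FTP is a one-liner; the target IP and username are placeholders, and rockyou.txt ships with Kali:

# Sketch: brute-force FTP logins on a lab VM (placeholder IP and username).
# On a fresh Kali install, decompress the wordlist first:
#   sudo gunzip /usr/share/wordlists/rockyou.txt.gz
hydra -l admin -P /usr/share/wordlists/rockyou.txt ftp://192.168.56.101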
Simple session hijacking demo
It's been a long time since I've had to demo session hijacking. I picked DVWA as the vulnerable web application to demonstrate cookie theft and session hijacking using "Burp".
Scenario: In a man-in-the-middle (MitM) attack, a "Hacker" positions themselves between a client and a server. From a successful MitM position, the Hacker can use a web proxy like "Burp" to intercept traffic between the victim and the web application, capture the victim's post-authentication cookie, and use it to impersonate the authenticated victim.
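To show the impact, replaying the captured cookie outside the browser is enough. A minimal sketch with curl, where the host and session ID are placeholders (DVWA tracks the session with PHPSESSID plus a security-level cookie):

# Sketch: impersonate the victim by replaying their captured DVWA cookies.
# dvwa.local and the PHPSESSID value are placeholders.
curl -b "PHPSESSID=0123456789abcdef; security=low" http://dvwa.local/index.php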
Simple file carving demo
It's been a while since I've done hands-on file carving. I was pleasantly surprised at how much easier it now is to carve files straight out of "Wireshark".
Scenario: A "Hacker" sits on an open Wi-Fi network operated by a cafe and uses "Wireshark" to capture traffic traversing the wireless network. One user transfers an Excel spreadsheet containing personal data to an FTP server. The Hacker is able to carve the transferred file out of the captured packets.
"Snap" update issue
"Ubuntu" uses "Snap" for "Firefox" by default since 22.04 which has this annoying "pop-up" warning every other day. I wrote the following script to aid upgrading of "Snap" apps.
#!/bin/bash
sudo killall firefox
sudo snap refresh
echo -e "\nIf specific Snap app is still pending update, please use the following commands.\nsudo snap refresh <appname>\nkill <pid>\nsudo snap refresh"
DoH update
Support for DNS over HTTPS (DoH) in browsers has improved since I last researched it.
In "Brave", it is just a simple click to enable it.
For other browsers like "Firefox", you can refer to this link to enable DoH using "OpenDNS".
However, the "OpenDNS" option fails for some sites so I switched to "Cloudflare" instead.
"torfetch"
I found a tool called "TorCrawl" to help index Ransomware posts. However, that's all the tool can do and does not enable me t...
-
This annoying message popped up after I ran the update in avast! in Ubuntu yesterday. avast! crashes every time I attempt to launch it after...
-
I've used Nessus for years. I only recently heard of NeXpose after Rapid7 started funding Metasploit and promised to integrate their sca...