Windows Rogue DHCP Monitor

I wanted to monitor the networks of a couple dozen clients for rogue DHCP servers. I couldn’t find a suitable application for my needs, so I made one here. It is designed to be deployed to Windows Servers configured as DHCP servers.

It was only after I made the application and bragged about it on IRC that I was told that Windows already has rogue DHCP server detection, lol. Oh well. I’ll keep this up since it’s pretty neat to me.

How it works is pretty simple. For each IPv4 interface of a Windows computer, it sends a DHCP packet (the contents of which were just copied from Wireshark into a broadcast.bin file) and listens for DHCP responses. Then it compares the system's own IP addresses to the DHCP servers that responded. If any responding DHCP servers aren't on the local system, it returns those IP addresses.
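
For anyone curious about the mechanics, here's a minimal sketch of the idea in Python. This is not the actual tool, just an illustration: broadcast the captured DHCP discover bytes from broadcast.bin, collect the addresses that answer, and subtract the machine's own addresses.

# Minimal sketch of the idea (not the actual tool): broadcast the captured
# DHCP discover bytes and report any DHCP servers that aren't this machine.
import socket

def find_rogue_dhcp(local_ips, timeout=5.0):
    with open("broadcast.bin", "rb") as f:
        discover = f.read()

    seen_servers = set()
    for local_ip in local_ips:  # one probe per IPv4 interface
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.settimeout(timeout)
        sock.bind((local_ip, 68))  # DHCP client port; needs admin rights
        sock.sendto(discover, ("255.255.255.255", 67))  # DHCP server port
        try:
            while True:
                _, (server_ip, _) = sock.recvfrom(4096)
                seen_servers.add(server_ip)
        except socket.timeout:
            pass
        finally:
            sock.close()

    # Anything that answered but isn't one of this machine's addresses is suspect.
    return seen_servers - set(local_ips)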

It's released under the MIT license, so feel free to use it at work or wherever else.

Example usage in an administrative command prompt:

C:\Path\To\Executable\windows_rogue_dhcp.exe

It’s that simple at the moment. Be sure to run it in the same folder as the broadcast.bin file. Eventually there may be more features, but not yet.

Below is a compiled 64-bit executable, which isn't guaranteed to be up to date. Please note that it must be run in the same folder as the broadcast.bin file.

Bill Gates Stimulants

I web search a lot of really random stuff. A large amount of what I know about the world stems from what I find in those search results.

Because of how important search results are to my view of the world, I was quite surprised when I did a search on DuckDuckGo for

Bill Gates Stimulants

It returned absolutely nothing relevant, which isn't particularly surprising. The billionaire philanthropist undoubtedly has a spectacular PR team protecting his reputation. Oftentimes I can get around this by narrowing my search. I searched again, this time using quotes to force specific words and phrases to be found:

“Bill Gates” “stimulants”

Zero results found.

This result honestly shocked me. You’re telling me that in the entirety of the internet, there has been no insane blogger that has gone on a conspiratorial tirade about how Bill Gates’s stimulant abuse is the direct cause of 5G-related autism, leading to the future collapse of America because of the impending zombie apocalypse caused by vaccines?

It makes me wonder what else gets manipulated in web search results. Given the same query, Google returned somewhat relevant information, though not what I was hoping for. It looks like Gates has (understandably) not discussed his thoughts on stimulants.

Part of the purpose of this post is to see whether it ends up indexed by the search engines, or whether the topic gets suppressed somehow. I guess we'll find out soon.

Barking Up the Wrong Tree

Social power is the capacity to influence the behavior of others. Explicit power is granted by rigidly defined social hierarchies, like the office of President or CEO, while implicit power is influence gained by other means, sometimes taken by force but usually earned.

Many people with explicit power think that their power reaches farther than it does. Such distortion is caused by repetition: if they spend most of most days leveraging their power to get work done, of course that mentality of influence will eventually become second nature.

I think this blending in the mind between explicit and implicit power is why people in powerful positions tend to be seen as jerks. It's a stereotype to see a rich fart weaving in and out of traffic in his BMW or yelling at a McDonald's worker because her fries aren't done in time. They try to leverage their power to influence the behavior of other people or get away with actions that hurt others, but because that power isn't explicitly granted, it's interpreted as an imposition by those on the receiving end of the tirade. We don't see the thousands of times they leverage that power to get real work done.

The positive aspects of power don't excuse borderline-sociopathic behavior, though. I suppose I haven't had enough power to really see a solution. However, just as a pedestrian doesn't need to be an engineer to point out that a house shouldn't be leaning, I don't need to know the solution to see that it's a problem.

Feldot – a failed but fun side project

A few months ago, I released a site called Feldot. It was a novel website discovery application that unfortunately failed to grow. Since it failed, I've decided to discuss its inception, its design choices, and why I think it didn't work.

Eight months ago now, I created a toy called Randomsite (blog post here). I used a port scanner to find random web servers on the internet and stuck them in a sqlite database. I learned the very basics of a web application framework called Django, just enough to get something functional, and set the site up so that people were redirected to a random web server when they clicked a link.

After posting the site on Hacker News and getting reupped by dang, the post took off and I ended up with thousands of people and bots checking out my site. I had created plenty of software before; I'd been making various tools and toys since I was 14. However, this was the first time people actually used something I created, and it made the entire experience 100x more enjoyable.

Not only did it get seen, but I got feedback on what needed to be better. Originally, the software added servers to the database even when they returned error messages, and enough of them did that people complained. With that feedback, I removed many of those sites and immediately saw a huge spike in errors from my site. I broke something! I scrambled, fixed a few bugs in my code (the "random" selection from the database assumed there weren't gaps in the id field), and drastically improved the experience using that feedback. The entire experience was incredible, and I wanted to do it again.
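
To illustrate the bug, here's a rough sketch with hypothetical table and column names, not my actual code: picking a random integer up to the maximum id breaks as soon as pruning leaves gaps, while selecting by row offset doesn't care about gaps at all.

# Hypothetical sketch of the fix: pick a random row without assuming the
# id column is gap-free (pruning rows leaves holes in the sequence).
import random
import sqlite3

def random_site(db_path="sites.db"):
    conn = sqlite3.connect(db_path)
    try:
        # Buggy approach: random.randint(1, max_id) followed by SELECT ... WHERE id = ?
        # fails once deleted rows leave gaps. Selecting by offset instead:
        (count,) = conn.execute("SELECT COUNT(*) FROM sites").fetchone()
        offset = random.randrange(count)
        (ip,) = conn.execute(
            "SELECT ip FROM sites LIMIT 1 OFFSET ?", (offset,)
        ).fetchone()
        return ip
    finally:
        conn.close()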

I thought for a long time about how to make novel website discovery interesting, social, unique, and better than what I already had made. I decided I wanted a reddit-like site, but instead of having posts link to URLs like reddit.com/r/funny, they would instead link to only domain names like reddit.com. An issue here is the difficulty of new-site discovery. I knew I had to combine the site with a tool that made new sites easy to find.

A big issue with the old site was that end users connected directly to an IP address, so their requests carried no hostname. Nginx, the web server I used, supports name-based virtual hosting, and other web servers have something similar: many websites share a single IP address, and the server uses the hostname in the request to decide which site to serve, saving on cost and better utilizing computer resources. Since my toy connected by IP alone, it missed every single website in this configuration, which is most of them. To find most websites, I needed access to the zone files, the lists of all registered domain names for each top-level domain. I had to jump through some hoops and cut through red tape, but after a few weeks I ended up with the ICANN zone files for the most popular (in the US) TLDs, including all .com names. The fun work could begin.
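
A quick illustration of why that matters, using made-up hostnames and a placeholder address: two requests to the same IP can land on completely different sites depending on the Host header the client sends, and a bare-IP request only ever sees the server's default site.

# Two sites behind one IP, distinguished only by the Host header.
# The hostnames and address below are placeholders for illustration.
import requests

ip = "203.0.113.10"
for host in ("example-one.com", "example-two.com"):
    r = requests.get(f"http://{ip}/", headers={"Host": host}, timeout=5)
    print(host, r.status_code, len(r.content))

# Hitting http://203.0.113.10/ with no meaningful Host header lands on the
# server's default site, which is why IP-only scanning misses the rest.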

I randomized and filtered the zone file data so that only the domain names were placed into a sqlite database. I then made a series of Python and shell scripts that pulled the next domain from the database and probed it for a web server. If it responded, didn't return an error, didn't have a bunch of numbers in the name, and passed through several other filters, I saved its domain, IP address, and first 100 bytes of HTML to another database.
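
The probing step looked roughly like the sketch below. It's a simplification with made-up filter thresholds, not the original scripts: fetch the site, apply a few cheap filters, and keep a small record if it passes.

# Rough sketch of the probe-and-filter step; thresholds and names are made up.
import re
import socket
import requests

def probe(domain, timeout=5):
    try:
        ip = socket.gethostbyname(domain)
        r = requests.get(f"http://{domain}/", timeout=timeout)
    except (socket.gaierror, requests.RequestException):
        return None
    if r.status_code >= 400:                    # skip error pages
        return None
    if len(re.findall(r"\d", domain)) > 3:      # skip digit-stuffed domains
        return None
    return {"domain": domain, "ip": ip, "snippet": r.content[:100]}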

After several thousand sites were saved, I noticed an issue on review. There were a lot of spam sites that needed to be filtered out, but exact-match-and-delete scripts didn't work because there were small variations between a lot of different sites. Long story short, I used a Python module called difflib to do a fuzzy comparison against known spam sites and deleted anything within a certain similarity threshold. Since the comparison was computationally expensive, I parallelized it using the multiprocessing module so that it wouldn't take a few weeks to complete.
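
The pruning amounted to something like this sketch, where the threshold and spam list are placeholders for illustration: compare each saved snippet against known spam snippets with difflib, and fan the work out across CPU cores.

# Sketch of the fuzzy spam pruning; the threshold and spam list are placeholders.
from difflib import SequenceMatcher
from multiprocessing import Pool

SPAM_SNIPPETS = []   # first bytes of pages already flagged as spam
THRESHOLD = 0.9      # similarity above this counts as spam

def is_spam(snippet):
    return any(
        SequenceMatcher(None, snippet, spam).ratio() > THRESHOLD
        for spam in SPAM_SNIPPETS
    )

def prune(snippets):
    with Pool() as pool:              # one worker per CPU core by default
        flags = pool.map(is_spam, snippets)
    return [s for s, spam in zip(snippets, flags) if not spam]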

Eventually I ended up with a list of 100,000 interesting sites with a good enough signal-to-noise ratio to get the site started. Those 100,000 sites were loaded into the PostgreSQL database, and the explore section of the site reads through them sequentially. That just left the reddit-like front page.

I won't go too much into the creation of the reddit-like front page. Every now and then, the most recent 10,000 posts are scored as a function of time, up/downvotes, and moderator inputs, and the resulting order is cached with Redis. Loading the front page queries Redis, with an offset based on what page you are on. There are plenty of posts on how reddit was created; feel free to check those out.
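
In spirit it works like the sketch below, with an entirely made-up scoring formula and key names: rank recent posts, store the ordered ids in a Redis list, and slice that list by page on each request.

# Simplified front-page sketch; the scoring formula and key names are made up.
import time
import redis

r = redis.Redis()

def score(post):
    # Toy ranking: votes (plus moderator nudges) decay with the post's age.
    age_hours = (time.time() - post["created_at"]) / 3600
    return (post["upvotes"] - post["downvotes"] + post["mod_boost"]) / (age_hours + 2) ** 1.5

def rebuild_front_page(recent_posts):
    ranked = sorted(recent_posts, key=score, reverse=True)
    pipe = r.pipeline()
    pipe.delete("front_page")
    pipe.rpush("front_page", *[p["id"] for p in ranked])
    pipe.execute()

def front_page(page, per_page=25):
    start = page * per_page
    return r.lrange("front_page", start, start + per_page - 1)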

The site was posted, and it took off about as well as a rock takes off into space. Looking at the site, that isn't surprising. While making it, I tried to focus entirely on UX, keeping the site simple and easy to use on mobile and making it extremely fast to respond. Looking back, I think more of a focus on appearance and UI would have done some good.

This site depends heavily on network effects: there need to be enough people posting interesting content to keep it self-sustaining. Ultimately, it did not get there. With some reconfiguration, UI updates, and marketing, it might be able to grow, but the complete lack of traction doesn't make me want to do that. I take solace in the fact that while the site didn't take off, I learned enough in the process that I could finally make a long, rambling blog post of my own.

Random Web Server App

When I was younger and had just learned to use nmap, it was a hobby of mine to scan the internet and browse random web servers. At the time, I used very aggressive scans (nmap -A -iR 500 --top-ports 30 --open) and found some incredibly interesting servers (it is amazing how many computers on the internet are blatantly compromised). When I first started doing it, it was a completely manual process: I would start the scan, scroll through pages of IP addresses and port information, and manually copy and paste interesting IPs and ports into Firefox. Eventually I automated the process with a shell script and some Python that scanned and filtered IP addresses, noted interesting ports, and automatically opened a browser to the IP.
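
The original script is long gone, but it was in the same spirit as this rough reconstruction (the exact ports and parsing here are guesses): run a random-target nmap scan, pull hosts with open web ports out of the grepable output, and open each one in the default browser.

# Rough reconstruction of the lost script, not the original: scan random
# hosts with nmap and open anything with an open web port in a browser.
import re
import subprocess
import webbrowser

def random_scan(count=50):
    out = subprocess.run(
        ["nmap", "-iR", str(count), "-p", "80,443,8080", "--open", "-oG", "-"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        match = re.search(r"Host: (\S+).*Ports: (.*)", line)
        if not match:
            continue
        ip, ports = match.groups()
        for port in re.findall(r"(\d+)/open", ports):
            webbrowser.open(f"http://{ip}:{port}/")

random_scan()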

Between a couple dozen OS installs, this script was lost. To recreate the experience, I made randomsite.lhackworth.com. It's some spaghetti code slapped together using nmap, Django, uwsgi, and Nginx that does something similar to what I used to do, albeit less noisily. It scans IPv4 addresses for an open port 80 and sticks seemingly good IP addresses into a database. Then when you go to randomsite.lhackworth.com/go/, it redirects you to a random IPv4 web server.
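
The /go/ endpoint boils down to something like this Django view, where the model and field names are hypothetical: pick a random saved server and redirect the visitor to it.

# Hypothetical sketch of the /go/ view; the Server model and its ip field
# are made-up names standing in for whatever the real schema uses.
import random
from django.http import HttpResponseRedirect
from .models import Server

def go(request):
    count = Server.objects.count()
    server = Server.objects.all()[random.randrange(count)]
    return HttpResponseRedirect(f"http://{server.ip}/")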

It does not aggressively scan like the old scripts did; it only scans for servers at port 80. Unfortunately, since port 80 is the standard port for web servers, many of these sites are much less interesting than the sorts of things I found back in the day. At the request of a few users on Hacker News, I implemented a filter to prune out sites with certain errors like 400, 404, and 500, which greatly improved the signal-to-noise ratio of the redirects. It was fun to cobble together. The site is now unmaintained, and the pruning software has been disabled. If you want specifics on how I did it, email me at contact@lhackworth.com.