don-code12 hours ago
I wrote a web application in an internship, circa 2011. I had no existing platform/framework to work with, no mentorship (the team wasn't really prepared to support an intern), and most importantly, an Apache web server running in Cygwin, with no PHP runtime installed. No one so much as told me what language I'd be writing at this job.
The Web development I'd done up to that point consisted of raw HTML/CSS, with some ASP.NET or PHP running on the backend. I'd never written a line of JavaScript in my life.
It was at this point that I "discovered" a winning combination: HTML, CSS, and JavaScript running in the user's browser. The backend was a set of C# applications that wrote to standard out and could be invoked directly by Apache's mod_cgi, since C# compiles down to Windows executables. There were countless better solutions at the time - ASP.NET and PHP (which I'd already used), FastCGI, WSGI, and others - but I'd never heard of them.
I outputted a JavaScript object (I had no idea what JSON was at the time, or that I was effectively outputting it) and read it back into the browser using a thin wrapper around XMLHttpRequest. I then iterated over the output and transformed the data into tables. jQuery was a thing at that point, but likewise, I'd never heard of it.
Say what you will about the job, the team, the mentorship (or lack thereof) - it took them three months before they realized I'd written C# at a Java shop, and at that point the thing was already being used widely across engineering.
The important takeaway here was that the "winning combination" of some minimal JavaScript and CGI hit the perfect ratio of simple, approachable, and powerful to let me finish the task at hand, in a way that (at least until anybody saw the architecture) everybody was thrilled with. It didn't require a deeper understanding of a framework to bootstrap it from nothing. Write an HTTP response to standard out, formatted as an object, and you were on your way.
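To make that concrete, here's a minimal sketch of the pattern in Python (the original was C#, so this is an illustrative stand-in, not the author's code): a CGI program that writes an HTTP response with a JSON body to standard out, which a server like Apache with mod_cgi would relay to the browser as-is.

```python
#!/usr/bin/env python3
# Minimal CGI sketch: the web server runs this executable and sends
# whatever it writes to stdout straight back to the browser.
import json
import sys

def respond(data):
    # A CGI response is headers, a blank line, then the body.
    sys.stdout.write("Content-Type: application/json\r\n")
    sys.stdout.write("\r\n")
    sys.stdout.write(json.dumps(data))

if __name__ == "__main__":
    respond({"rows": [{"id": 1, "name": "widget"}]})
```

That really is the whole bootstrap: no framework, just a program that prints.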
lenkite8 hours ago
This architecture is also wonderful for diagnosis and investigation. You have effectively broken down the problem into CLI tools that can be independently tested and invoked with problematic requests.
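A sketch of that diagnosis workflow (the script path and query here are hypothetical): because CGI passes the request through environment variables and stdin, you can replay a problematic request against the program directly, with no web server in the loop.

```python
# Replay a request against a CGI program directly from the CLI.
# "./app.cgi" below is a hypothetical script path, not a real tool.
import os
import subprocess

def replay(script, query_string):
    env = dict(os.environ)
    env.update({
        "REQUEST_METHOD": "GET",       # CGI passes the request via env vars
        "QUERY_STRING": query_string,  # e.g. the exact query that failed
        "GATEWAY_INTERFACE": "CGI/1.1",
    })
    result = subprocess.run([script], env=env,
                            capture_output=True, text=True)
    return result.stdout  # headers + blank line + body, as the server sees it

# replay("./app.cgi", "id=42&debug=1")
```

The same executable serves web traffic and answers a shell one-liner, which is exactly what makes the architecture so easy to investigate.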
dspillettan hour ago
It is also the pattern many APIs and function-based architectures are embracing again, partly for that reason (directly, or as a side effect of good decoupling being a core goal).
Bender3 hours ago
Nice. Might this eventually be incorporated by F5's or Maxim's nginx? I am also curious whether any performance testing has been done on this vs. fcgiwrap, both in terms of requests per second and also socket limits and memory usage.
BirAdam3 hours ago
I've just used fcgiwrap for this purpose with nginx. What's the advantage of this?
Edit: nvm. It looks like it’s just easier to configure.
electroglyph12 hours ago
OpenResty + Lua ftw
klibertpan hour ago
Yes, but.
OpenResty is a platform with multiple advantages. It's fast (Nginx), it's async via stackless coroutines (no function coloring), it's again fast (LuaJIT), it's relatively easy to deploy, it's feature-rich (all the Lua packages available), and it integrates well with some other tech like Redis (also Lua-enabled).
On the other hand, Lua is an extension language. It's not designed to support large codebases. You can, of course, with enough discipline, make it work - just as you can, with enough grit, make Perl, JavaScript, Ruby, or older Python work, but you're on your own then. You need to invent your own code organization scheme and adhere to it religiously. You need to reinvent half of the stdlib that Python provides out of the box. The reinventing process extends to the need to create a set of helpers to define classes and inheritance between them, which are only provided as powerful but inconvenient primitives (metatables) in Lua.
It's incredible how much you can do with just 100 SLOC in OpenResty - it's absolutely amazing as a component of a larger system. However, writing lengthy, uninspiring, yet complicated business logic in Lua under OpenResty is not a good idea.
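To give a sense of that scale, here's a hedged sketch of a complete OpenResty endpoint - just an nginx config fragment with inline Lua (hypothetical and untested, but it shows why small handlers are so cheap):

```nginx
# Hypothetical OpenResty config fragment: a complete JSON endpoint
# in a handful of lines, with the Lua inline in the nginx config.
location /hello {
    default_type application/json;
    content_by_lua_block {
        local cjson = require "cjson"  -- cjson ships with OpenResty
        ngx.say(cjson.encode({ msg = "hello", ts = ngx.time() }))
    }
}
```

At this size it's delightful; the trouble described above starts when blocks like this grow into real business logic.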
lemcoe912 hours ago
Absolutely. For me, OpenResty combined with a custom Lua script solved an incredibly complicated business problem that I ran into a couple years ago, and now that arrangement serves thousands of complex requests per day. With Nginx and that custom code combined into a single configuration, not requiring a separate backend service, we turned a complicated problem into a very simple one!
lovehashbrowns11 hours ago
I used OpenResty + Lua + Redis to implement a quick blacklist for an ad platform like 10 years ago. It really does make everything so simple and it's pretty fun to work with.
johnorourke9 hours ago
Brings back memories! I used it in 1996 to build an e-commerce site in Perl v4.
doublerabbit14 hours ago
eBay confused the 17 year old me back in 2007 when their listings were powered by a dll file.
cgi.ebay.co.uk/ws/eBayISAPI.dll
It wasn't until many years later that I learned about CGI.
toast011 hours ago
In the beginning, eBay used to run the frontends on Windows, with IIS. When they moved to something else, they kept the urls, because cool urls don't change.
finaard7 hours ago
Unfortunately most of the people out there don't care about that - the majority of the links I've set on my site over the decades are dead nowadays. If I ever get bored I'll write a script that checks whether there's an archive copy, and links to that.
I also went through quite a bit of effort to make sure _my_ links don't break. There are a handful of tools (like a dig wrapper for DNS lookups) which can still be reached as /cgi-bin/dig.cgi, even though they haven't been a standalone CGI script for two decades now. It's still technically CGI (just running as FastCGI nowadays), and the code base is just as good as you'd expect from what started as an experiment to see how much of a text markup parser I could write using Perl regexps alone.
treyd13 hours ago
I never did understand why this path/file structure was exposed.
immibis13 hours ago
Same reason https://foo.example/bar/baz.html exposes it - it tells the web server which file to access. Cool, customized routing wasn't always a thing.
The eBay example, by the way, is ISAPI, not CGI.
anon636210 hours ago
IIS, Apache HTTPd, and Nginx have supported rewrite rules with wildcards and regex since forever.
Thus, there's no absolute rule that serving static content must faithfully map to the filesystem representation, except convenience. Nor do dynamic requests need to expose the details of the dynamic handler in their URIs, unless the application cannot change the links it generates.
Worrying about revealing backend state is somewhat Security Through Obscurity (STO)(TM), but it's still unwise to volunteer extraneous information without a purpose. Preferably, some other simple external representation - a one-way hash, say - should be used.
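For illustration, a sketch of that idea with nginx rewrite rules (all paths here are hypothetical): the public URL stays opaque while the dynamic handler remains an internal detail.

```nginx
# Public URL /items/123 never reveals the dynamic handler behind it.
location ~ ^/items/(\d+)$ {
    rewrite ^/items/(\d+)$ /cgi-bin/item.cgi?id=$1 last;
}

# The real handler is only reachable via internal rewrites.
location /cgi-bin/ {
    internal;
    # ... CGI/FastCGI gateway config here ...
}
```

Apache's mod_rewrite and IIS's URL Rewrite module support the same pattern with their own syntax.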
I played with client-side Netscape JS and Apache HTTPd CGI bash shell scripts (not even Perl) to write a toy multiuser chat app in 1996. IIRC, it used a primitive form of long polling: it kept an HTTP/0.9 session open with periodic keepalive commands, and each received message was broadcast to all other connected users.
immibis6 hours ago
It's always been technically possible to make URLs look how you want but it wasn't always in vogue.
mpyne21 minutes ago
And in eBay's specific case they may have opted against it due to the performance challenges URL rewriting may have involved.
aeyes12 hours ago
eBay architecture slides from 2006: https://www.cs.cornell.edu/courses/cs330/2007fa/slides/eBayS...
3.3M LoC of C++ - that must have been quite painful.
toast011 hours ago
They replaced it with Java, which was probably worse. :p
nine_k7 hours ago
No. At least, not nearly as many footguns. Also, compiler error messages that actually can be deciphered.
whizzter2 hours ago
C++ compiler errors back in the C++98/03 days weren't usually so bad as long as you were writing basic code and not some SFINAE shit. Increased Boost usage and C++11 (which introduced variadic templates and a lot of other things) made errors explode.
Today we've removed a lot of SFINAE magic, since if constexpr is easier to read/trace, and concepts now exist to pre-constrain to compatible types, which keeps implementation details from becoming the error.
pjmlp2 hours ago
True, on the other hand writing portable code was a mess, even if we constrained ourselves to UNIX world, as each vendor compiler supported a different flavour of the standard.
I had some fun between HP-UX aC, AIX xlC, Solaris cc, and Red-Hat gcc, and naturally there was MSVC as well.
We had #ifdefs in place for prototypes, because the version of the aC compiler we had on our development server still only supported K&R C function definitions.
pjmlp2 hours ago
Not really; there is a reason why Java pivoted into the server during the early 2000s. It wasn't only Sun, IBM, Oracle, and BEA marketing - it was definitely a much better experience than mod_tcl or mod_perl without JIT[0], or CGI/ISAPI in C or C++.
Likewise with ASP.NET on Windows land, as ASP with VB, and C++ alongside COM wasn't that great either.
[0] - By the time this started to matter Java 1.3 was already the common version.
17186274405 hours ago
.html is less problematic than .dll, since the served file IS html, regardless of how you generate it. In an ideal world you could just fetch baz.json and get to the intended API.
qwertox11 hours ago
That's not really an explanation. Could be named https://foo.example/bar/baz as well.
I also used to ask myself why they would expose the filename of the DLL.
plorkyeran10 hours ago
At the time that would have required that they write a custom proxy that sat in front of IIS and everyone would have been very confused about why you even wanted to do that. IIS (and every other web server in 1996) just took the URL, converted it to a path, and ran that, with no transformations applied. This was three years into the web and many things that are simple and obvious now were not back then.
flomo10 hours ago
Yeah, IIS didn't have any sort of .htaccess-type thing for URL routing IIRC. Even later on, we had to dig under the hood of ASP.NET because we didn't want .aspx in our paths.
pjmlp2 hours ago
It definitely could have had it, provided someone wrote an ISAPI extension/filter combo for it.
I happen to know, because I wrote a complete proxy on top of ISAPI, with callback handlers that could be written in C or Tcl.
On Windows NT and 2000, for our application server based on top of IIS (we had a version on top of Apache as well).
rescbr3 hours ago
I believe it was Windows Server 2008 that brought URL rewriting support to IIS.
But by then eBay wouldn't be using ISAPI :)
ngxsux13 hours ago
[flagged]
raincom13 hours ago
Just curious, who did they steal it from? Or which code base is it stolen from?
RiverCrochet13 hours ago
It may be a reference to this: https://arstechnica.com/information-technology/2024/02/nginx...
jesprenjop5 hours ago
Presumably by F5, from the original Russian developers. But they forked it into freenginx and Angie, and this module specifically mentions Angie. So they "stole" it back and continue developing it.
Edit: there's already a sibling post with a link with way more info