What's the state of 'traditional' turnkey time sharing for coding + development in 2023? Are always-on, sign-on-and-go servers or clusters of servers with shared storage and a wealth of pre-installed software, used by a community of scores or hundreds or thousands of developers, still a common thing? Or has that pretty much gone by the wayside, with most development carried out in more isolated, individualized, single-user-oriented environments, be it laptop/desktop or cloud-based (containers, VMs, EC2, etc.), customized/tailored to a single individual's needs?
Supercomputing environments I've encountered are still timesharing-based, because by definition they are too expensive to be single-user. My old university still operates a cluster-based system that you can submit jobs to, with "head nodes" that have ssh access for light work/setup (they aren't suited for heavy jobs). It's operated by the CS dept but available for others to use, the sciences in particular.
From what I've seen at FAANG companies, GPU clusters for ML researchers are similar.
Basically a bunch of beefy nodes, each with 96 CPUs and 8 Nvidia A100 cards, configured in a SLURM cluster. Users get SSH access and can run tmux, Jupyter, or whatever else is needed for their workflow.
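For concreteness, here's a minimal sketch of the kind of sanity check one might run after landing on such a node, assuming the standard SLURM environment variables are set; the PyTorch import is optional and site-dependent:

    # check_allocation.py -- quick sanity check inside a SLURM allocation.
    # Assumes SLURM's usual environment variables; PyTorch is optional and
    # only used to enumerate the GPUs if it happens to be installed.
    import os

    print("Job ID:      ", os.environ.get("SLURM_JOB_ID", "<not in a SLURM job>"))
    print("Node list:   ", os.environ.get("SLURM_JOB_NODELIST", "?"))
    print("CPUs on node:", os.environ.get("SLURM_CPUS_ON_NODE", "?"))
    print("Visible GPUs:", os.environ.get("CUDA_VISIBLE_DEVICES", "<none exposed>"))

    try:
        import torch
        for i in range(torch.cuda.device_count()):
            print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    except ImportError:
        print("PyTorch not available here; skipping the GPU probe.")

In practice this runs inside an srun/sbatch allocation or a tmux session on a compute node; the head nodes mentioned upthread are just for editing and submitting jobs.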
I'm not sure if this is what you're getting at, but the tilde[0] and SDF[1] communities are going pretty strong still. Both of them amount to getting a shell account on a shared box with a bunch of pre-installed/hosted utilities. Much more hobbyist-centric than professional though.
[0]: https://tildeverse.org [1]: https://sdf.org
I don't know about "traditional" systems, but I'm trying to create a modern time-share system for building web apps.
There's a work in progress at: https://gridwhale.com. (Warning: Still primitive and lots of bugs).
My manifesto is here: https://medium.com/@gridwhale/rise-of-the-hyperplatforms-d4a...
What a coincidence, I was reading about this last night on a great website:
The link is not working (Forbidden).
> the charter sought was not merely to write some (unspecified) operating system, but instead to create a system specifically designed for editing and formatting text, what might today be called a ‘word-processing system.’
That's an interesting perspective. Was the shell intended to be a glorified word processor, with tools like sort, uniq, wc? In a documentary from the early 1980s, Kernighan also demonstrates this as a "killer" feature of Unix:
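For the flavor of it, here's a rough Python equivalent of the kind of pipeline those tools enabled, something like tr | sort | uniq -c | sort -rn run over a document (the file name below is just a placeholder):

    # word_freq.py -- rough Python analogue of the classic word-frequency pipeline:
    #   tr -cs 'A-Za-z' '\n' < doc.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn
    # "doc.txt" is only a placeholder; pass any text file on the command line.
    import re
    import sys
    from collections import Counter

    path = sys.argv[1] if len(sys.argv) > 1 else "doc.txt"
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z]+", f.read().lower())

    counts = Counter(words)
    print(f"{len(words)} words, {len(counts)} distinct")  # the wc-ish summary
    for word, n in counts.most_common(20):                # sort | uniq -c | sort -rn
        print(f"{n:6d} {word}")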
Here's Ken Thompson talking about it (the word processing angle) quite entertainingly: https://youtu.be/EY6q5dv_B-o?t=1494
IIRC it was initially employed for their payroll system to help print checks.
First commercial use in Bell Labs is usually credited as the secretarial pool for writing patent applications.
I always give an ironic chuckle when fighting the print system on a unix machine (lpd, lp, cups...). The internal rant that goes with it is something like:
"Printing was THE original purpose, the reason for existence, of unix; you would think it would be a solved problem instead of the terrible mishmash of sins that it is."
Printing on unix is, in my experience, quite painless compared to Windows, for example.
it just works :p
And in particular troff, which played a role similar to the one LaTeX plays today.
I think LaTeX is less relevant today than it was a decade ago.
Outside academia, most people would rather use pandoc, AsciiDoc, or some Markdown dialect instead.
LaTeX is good for typesetting; AsciiDoc/Markdown/RST are good for maintaining basically-readable-as-text documentation that can also render to various presentation formats.
In many cases, if those output options include typeset documents, the toolchain that gets there uses LaTeX, though the most common output format is usually HTML.
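For example, a common Markdown-to-PDF path just drives pandoc, which in turn hands the typesetting to a LaTeX engine; a minimal sketch, assuming pandoc and a TeX distribution are installed, with a placeholder file name:

    # md_to_pdf.py -- sketch of a Markdown -> PDF toolchain that goes through LaTeX.
    # Assumes pandoc and a TeX distribution (providing xelatex) are on the PATH;
    # "notes.md" is a placeholder input file.
    import subprocess

    # PDF output: pandoc converts Markdown to LaTeX and runs xelatex behind the scenes.
    subprocess.run(
        ["pandoc", "notes.md", "-o", "notes.pdf", "--pdf-engine=xelatex"],
        check=True,
    )

    # HTML output, by contrast, never touches LaTeX at all.
    subprocess.run(["pandoc", "notes.md", "-o", "notes.html"], check=True)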
OK, let's take this in some more context:
> To the Labs computing community as a whole, the problem was the increasing obviousness of the failure of Multics to deliver promptly any sort of usable system, let alone the panacea envisioned earlier.
What was Multics? One core animating idea of Multics was the "Computer Utility" concept, or centralized mainframe-based computing as a utility as reliable and available as water and power:
https://multicians.org/fjcc1.html
> One of the overall design goals is to create a computing system which is capable of meeting almost all of the present and near-future requirements of a large computer utility. Such systems must run continuously and reliably 7 days a week, 24 hours a day in a way similar to telephone or power systems, and must be capable of meeting wide service demands: from multiple man-machine interaction to the sequential processing of absentee-user jobs; from the use of the system with dedicated languages and subsystems to the programming of the system itself; and from centralized bulk card, tape, and printer facilities to remotely located terminals. Such information processing and communication systems are believed to be essential for the future growth of computer use in business, in industry, in government and in scientific laboratories as well as stimulating applications which would be otherwise undone.
-- "Introduction and Overview of the Multics System", F. J. Corbató (MIT) and V. A. Vyssotsky (Bell Labs)
This was an ambitious project and it took some time to achieve. Work on Multics started in 1965 and Multics was first used for paying customers in October of 1969, six months after Bell Labs dropped out and work on Unix began.
https://multicians.org/chrono.html
So did Multics deliver "promptly"? Probably not, but four years for an ambitious project isn't so bad, and it took Unix six years (until 1975) to get to Sixth Edition, the first version of Research Unix used much outside of Bell Labs.
https://en.wikipedia.org/wiki/Version_6_Unix
My point is that Multics is given short shrift by this history, which is understandable because it's about a completely different OS, but it's painted as a failure that was stalled in development for an inordinate amount of time. It wasn't a failure, as it was used commercially until 2000, and whether it was inordinately slow in coming is impossible to fairly judge because it was, in most respects, the first of its kind, in that it was the first "complete" OS with what we'd now consider the full complement of functionality.
Multics influenced a number of minicomputer OSes of the 1970s. I never used PRIMEOS, but have heard it described as a smaller version of Multics. I did use Data General's AOS/VS, and that had been influenced by Multics--so I suppose that its predecessor AOS also had been.
> I never used PRIMEOS, but have heard it described as a smaller version of Multics.
I've never heard PR1MEOS referred to that way, but it makes a lot of sense. It was a lot easier to wrap one's brain around than the competition.
UNIX is one of whatever MULTICS was many of.
Wikipedia says that Multics had "about 80 installations". That's probably a big part of why Multics is given short shrift - it was successfully built, but (approximately) nobody cared.
Wikipedia also notes that Unix ran on less expensive hardware than Multics. That mattered - there were many more systems that Unix could be put on.
> Wikipedia says that Multics had "about 80 installations".
It was a mainframe operating system that only ran on high-end hardware. What do you expect?
> Wikipedia also notes that Unix ran on less expensive hardware than Multics.
Yes. It was also less capable. Both of those things were design goals of Unix and not of Multics.
> It was a mainframe operating system that only ran on high-end hardware. What do you expect?
I expect that by 1975, say, the number of high-end hardware installations was a lot more than 80. Multics, even when completed, didn't make much of a dent in the world. The ideas were influential, used in a number of later OSes, but Multics itself didn't go very far.
> Yes. It was also less capable. Both of those things were design goals of Unix and not of Multics.
Yes. Multics had the wrong design goals. Arguably, it hit them, but that didn't matter, because they were the wrong goals (though not obviously wrong at the time). That's why Multics is given short shrift - it successfully implemented what turned out in retrospect to be the wrong thing.
Why were they the wrong goals? There were something like 600,000 PDP-11s sold. Sure, Unix didn't go on all of those, and Multics could have run on more than the 80 machines that it did. Still, that's an insanely large number of installations that they... not exactly ignored, because even the first PDP-11 hadn't shipped when the Multics project began, but at least a huge number of machines that they were unable to serve. And of course, as always happens in this business, the lower-end machines became more and more powerful. The result of that (plus easier portability) was that Unix ate the world, and Multics became a footnote.
> I expect that by 1975, say, the number of high-end hardware installations was a lot more than 80.
Are you sure? Wikipedia notes that for the Cray-1, released in 1975, eventually "eighty Cray-1s were sold, making it one of the most successful supercomputers in history."
https://en.wikipedia.org/wiki/Cray-1
Yes, yes, there's a difference between high-end mainframes and supercomputers, but there's also a difference between mainframes and the PDP-11, which is better categorized as a minicomputer.
Personally, I find the dismissal of Multics to be less about reality and more about the perpetuation of pre-existing lore. The Xerox Star and Smalltalk also did not fare well in terms of commercial sales, but they tend to be lionized in computer history rather than treated as a joke.
> I expect that by 1975, say, the number of high-end hardware installations was a lot more than 80.
Yes, part of the problem was that, for all of its goals of portability, Multics only ever ran on two kinds of computer, and depended on their hardware features quite intimately. Compared to Unix, it was not effectively portable.
> Yes. Multics had the wrong design goals. Arguably, it hit them, but that didn't matter, because they were the wrong goals (though not obviously wrong at the time). That's why Multics is given short shrift - it successfully implemented what turned out in retrospect to be the wrong thing.
> [snip]
> And of course, as always happens in this business, the lower-end machines became more and more powerful. The result of that (plus easier portability) was that Unix ate the world, and Multics became a footnote.
Yes: The computing utility concept came to pass on a smaller scale than the Multics people anticipated, and so could get by with smaller, cheaper, and less reliable hardware and software, which eventually grew up to be more Multics-level in capacity and reliability without becoming as expensive.
My point with my original post was to push back on the notion that Multics was a complete failure, that it stagnated and died after Bell Labs left the project. It didn't: it produced an OS that found a tiny commercial niche, and it executed on its vision to the extent the world as a whole allowed it to. The fact that the world moved in a different direction doesn't take away from the technical accomplishment of the Multics development team.
People keep forgetting that UNIX only became portable after the UNIX V5 rewrite, and thanks to having the source code freely available to anyone that asked for it, for a symbolic price.
Until UNIX V6 and the Lions commentary book, it was PDP-7/PDP-11 only.