I've wondered for a long time if we would have been able to make do without protected mode (or hardware protection in general) if user code was verified/compiled at load time, e.g. the way the JVM or .NET do it... Could the freed-up transistor budget have been used to offset any performance losses?
I think the interesting thing about having protection in software is you can do things differently, and possibly better. Computers of yesteryear had protection at the individual object level (eg https://en.wikipedia.org/wiki/Burroughs_Large_Systems). This was too expensive to do in 1970s hardware and so performance sucked. Maybe it could be done in software better with more modern optimizing compilers and perhaps a few bits of hardware acceleration here and there? There's definitely an interesting research project to be done.
ah, PDE/PTE A/D writes... what a source of variety over the decades!
some chips set them step by step, as shown in the article
others only set them at the very end, together
and then there are chips which follow the read-modify-write op with another read, to check if the RMW succeeded... which promptly causes them to hang hard when the page tables live in read-only memory i.e. ROM... fun fun fun!
as for segmentation fun... think about CS always being writeable in real mode... even though the access rights only have an R but no W bit for it...
It used segmented 32-bit mode. Flat mode doesn't support virtual addressing, which was accomplished with the descriptor tables (and the ES register), if I recall correctly. lol it's been 33 years since I wrote Windows drivers. Had to use MASM to assemble the 16-bit segments to thunk to the kernel.
> These features made possible Windows 3.0, OS/2, and early Linux.
And also--before Linux--SCO Xenix and then SCO Unix. It was finally possible to run a real Unix on a desktop or home PC. A real game changer. I paid big $$$ (for me at the time) to get SCO Xenix for my 386 so I could have my own Unix system.
https://en.wikipedia.org/wiki/Singularity_(operating_system)
Managed code, the properties of its C#-derived programming language, static analysis, and verification were used rather than hardware exception handling.
I think hardware protection is usually easier to sell but it isn't when it is slower or more expensive than the alternative.
However, Win32s was introduced in 3.11, which was a subset of the 32-bit Windows API from NT.
3.11 also introduced 32-bit disk access and 32-bit drivers.
Microsoft did 32-bit in steps -- it was confusing already back then.