It seems like there are two camps: the small group of people who care about UEFI, and everyone else who doesn’t really notice or care as long as their computer works. So let’s talk about what UEFI is, how it came to be, what it’s suitable for, and why you should (or shouldn’t) care.
What is UEFI?
UEFI stands for Unified Extensible Firmware Interface, a standard maintained by an organization known as the Unified EFI Forum. Intel came out with EFI (Extensible Firmware Interface) and later made the spec public as UEFI. As a spec, implementation details vary between vendors and manufacturers, but the goal is to present a standard, understandable interface to an OS bootloader. This makes it much easier to write an OS, as you no longer need to worry about all the messy business of actually bringing up the chipset.
Several IBVs (Independent BIOS Vendors) offer their own implementations of UEFI that OEMs who produce motherboards can license and use in their products. Some examples would be AMI, Phoenix, and Insyde. You’ve likely seen their logo, or just the text of their name, briefly flash on the screen before your OS of choice properly boots.
Let’s talk about how UEFI boots. Generally, there are a few distinct phases. We say “generally” because there are many implementations, and many of them do things out of spec. The three broad phases are Security (SEC), Pre-EFI Initialization (PEI), and Driver Execution Environment (DXE). Each is a mini operating system in its own right. Because Intel is the one who started EFI and later turned it into UEFI, much of the design is built around how Intel processors boot up. Other platforms like ARM might not do much in the SEC or PEI phases.
The boot process for x86 processors is a bit strange. They start in real mode (though most processors these days are technically in unreal mode), with a 20-bit address space (1 MB of addressable memory), for backward-compatibility reasons. As the processor continues to boot, it switches to protected mode and then finally to long mode. In a multi-core system, all the processors race to grab a semaphore, and one is designated the BSP (bootstrap processor). The losers all halt until the BSP starts them via an IPI (inter-processor interrupt). Ordinarily, there is an onboard SPI flash chip with the firmware mapped into the end of the 32-bit physical address space. The Intel Management Engine (ME) or AMD Platform Security Processor (PSP) does most of the SEC phase, such as flushing the cache and starting the processors.
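To make the BSP idea concrete, here is a minimal sketch of how ring-0 firmware or kernel code can ask “am I the bootstrap processor?” on x86. The helper names here are mine, not from any particular codebase; the BSP flag itself really is bit 8 of the IA32_APIC_BASE MSR. GCC-style inline assembly assumed.

```c
/* Sketch: is this core the bootstrap processor (BSP)?
 * Bit 8 of the IA32_APIC_BASE MSR (0x1B) is set by hardware on the BSP only.
 * Requires ring 0 (firmware/kernel context), not a userland program. */
#include <stdint.h>

#define IA32_APIC_BASE_MSR 0x1B
#define APIC_BASE_BSP_BIT  (1u << 8)

static inline uint64_t rdmsr(uint32_t msr) {
    uint32_t lo, hi;
    __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
    return ((uint64_t)hi << 32) | lo;
}

int is_bootstrap_processor(void) {
    return (rdmsr(IA32_APIC_BASE_MSR) & APIC_BASE_BSP_BIT) != 0;
}
```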
Once the processors are started, PEI has officially begun. On Intel systems, there is no system RAM for most of PEI. This is because memory needs to be trained and links initialized before the processor can use them. The relentless push for more and more speed from RAM means that the RAM needs to be tested, calibrated, and configured on every boot, as different RAM sticks have different parameters. Many systems cache these parameters for faster boot times, but they often need to be invalidated and retrained as the RAM sticks age. On some AMD systems, the PSP handles memory training and loads UEFI before the main x86 processor is pulled out of reset. Intel systems use a trick called Cache-as-RAM (CAR), which turns the various caches into temporary RAM while the code executes in place (XIP) from flash. There is only a small stack, a tiny amount of heap space, and no writable static variables for PEI. Many Intel server platforms rely on the Board Management Controller (BMC) to train memory, as training large amounts of memory takes a very long time.
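For a feel of what code in this stage looks like, here is a sketch of a do-nothing PEI module (PEIM) entry point written against the open-source EDK2 headers. The module name is hypothetical, and the build scaffolding (INF file, library classes) is omitted.

```c
/* Sketch of a trivial PEIM entry point, EDK2 style, to show the shape of
 * code that runs before real RAM exists. */
#include <PiPei.h>
#include <Library/DebugLib.h>

EFI_STATUS
EFIAPI
MyPeimEntry (
  IN EFI_PEI_FILE_HANDLE     FileHandle,
  IN CONST EFI_PEI_SERVICES  **PeiServices
  )
{
  /* No writable globals here: until permanent memory is installed, a PEIM
   * lives in cache-as-RAM with only a small stack and heap to work with. */
  DEBUG ((DEBUG_INFO, "Hello from PEI, before DRAM is trained.\n"));
  return EFI_SUCCESS;
}
```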
After initializing RAM and transferring the contents of the temporary cache-RAM, we move to DXE. The DXE phase provides two types of services: boot and runtime. Runtime services are meant to be consumed by an OS; these are services such as non-volatile variables. Boot services are destroyed once ExitBootServices is called (typically by the OS loader); they include services like keyboard input and graphics drivers. BDS (Boot Device Selection) runs in DXE and is how the system determines what device to boot from (hard drive, USB, etc.).
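As a rough illustration of the two service tables, here is a minimal EDK2-style UEFI application that touches both. UefiMain is the conventional EDK2 application entry point; the build scaffolding is again omitted, and this is a sketch, not anyone’s production code.

```c
/* Sketch: a boot service goes away after ExitBootServices(); a runtime
 * service survives into the OS. */
#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>    /* gBS, gST */
#include <Library/UefiRuntimeServicesTableLib.h> /* gRT */

EFI_STATUS
EFIAPI
UefiMain (
  IN EFI_HANDLE        ImageHandle,
  IN EFI_SYSTEM_TABLE  *SystemTable
  )
{
  EFI_TIME Time;

  /* Boot-services era: console output through the system table. */
  gST->ConOut->OutputString (gST->ConOut, L"Hello from DXE/BDS land\r\n");

  /* Runtime service: GetTime() remains callable after the OS takes over. */
  if (!EFI_ERROR (gRT->GetTime (&Time, NULL))) {
    /* Time.Year, Time.Month, ... are now valid. */
  }

  /* Stall() is a boot service; it vanishes once the OS loader calls
   * ExitBootServices(). */
  gBS->Stall (1000000);  /* one second, in microseconds */
  return EFI_SUCCESS;
}
```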
This has been a very dense and x86-specific overview. Many architectures, such as ARM, eschew UEFI for something more like coreboot, LinuxBoot, or LK, where a small Linux kernel boots and then kexec’s into a much larger kernel. However, many ARM platforms can also leverage UEFI. Only time will tell which way the industry moves.
How It Came To Be
In 2005, UEFI entirely replaced EFI (Extensible Firmware Interface), the standard Intel had put forth a few years prior. EFI borrowed many things from the Windows of that period, such as the PE/COFF image format, and UEFI, in turn, borrowed practices from EFI. Before EFI, there was good old BIOS (Basic Input Output System). The name originated with the CP/M systems of 1975. In that period, the BIOS was the way the system booted and provided a somewhat uniform interface for applications via BIOS interrupt calls. The calls allowed a program to access the inputs and outputs, such as the serial ports, the RTC, and the PCI bus. Phoenix and others reverse-engineered the proprietary interface that IBM created so they could manufacture IBM-compatible machines, which eventually led to something close to a standard.
Is It Better Than BIOS?
Yes and no, depending on your perspective. Many OS vendors like UEFI because the services it provides make it easy to deliver a homogeneous boot experience. The Linux community, generally speaking, is agnostic at best and antagonistic at worst towards UEFI. The BIOS interface is pushing 45 years as of this writing and is considered legacy in every sense. Another point in UEFI’s corner is that it facilitates selecting different boot devices and updating the firmware on your machine. UEFI uses the GUID Partition Table (GPT) over the Master Boot Record (MBR), considered a plus as MBR is somewhat inflexible. Many platforms shipped today are based on the open-source EDK2 project from TianoCore, an implementation of UEFI that supports x86, ARM, and RISC-V.
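For the curious, this is roughly what the GPT header looks like in C, per the UEFI spec; the field comments note why it escapes MBR’s limits. A sketch for illustration, not a complete parser.

```c
/* Rough sketch of the on-disk GPT header (little-endian, packed), showing
 * why GPT is roomier than MBR: 64-bit LBAs and typically 128 partition
 * entries, versus MBR's four 32-bit-LBA slots. */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    char     Signature[8];              /* "EFI PART" */
    uint32_t Revision;
    uint32_t HeaderSize;
    uint32_t HeaderCrc32;
    uint32_t Reserved;
    uint64_t MyLba;                     /* 64-bit LBAs: no 2 TiB ceiling */
    uint64_t AlternateLba;              /* backup header at end of disk  */
    uint64_t FirstUsableLba;
    uint64_t LastUsableLba;
    uint8_t  DiskGuid[16];
    uint64_t PartitionEntryLba;
    uint32_t NumberOfPartitionEntries;  /* typically 128 */
    uint32_t SizeOfPartitionEntry;      /* typically 128 bytes */
    uint32_t PartitionEntryArrayCrc32;
} GptHeader;
#pragma pack(pop)
```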
The biggest complaint with UEFI is that it is a closed black box with unimaginable access to your computer and stays resident after the computer boots. BIOS is appealing since the interface is well-known and generally non-resident. UEFI can be updated more easily but also has a much greater need for updates. A bad UEFI update can brick your system entirely: it will not boot, and if fuses have been blown on the unit, it is almost physically impossible to fix, even for the manufacturer. Significant amounts of testing go into these updates, but most vendors are hesitant to push many of them because of the amount of work required.
Why You Should or Shouldn’t Care
At the end of the day, you care whether you can use your computer for the things that are important to you. Whether that’s playing a game, writing an email, or building a new computer, it doesn’t matter as long as the computer does what you want. And booting is just one oft-forgotten step in making that happen. If you care about knowing every single piece of code your machine runs, you need to buckle in for a long ride. There are companies, such as Purism with its Librem line, going to great lengths to make sure that tricky problems like memory init are handled without proprietary blobs. You can still tweak UEFI, [Hales] being a great example of tweaking the BIOS of an old-school laptop. Open-source tools for inspecting and understanding what’s going on under the hood are getting better.
Ultimately it is up to you whether you care about the boot process of your device.
Spyware.
Worse, spyware with remote access.
AKA a backdoor.
The fact that military and secret service agencies disable it says everything.
The fact that they do means it can be done, and by extension, if the method can be reproduced and rolled into a patch, the average user can do it too.
That fact just means they are not FIPS certified, so they should be disabled…
UEFI is a tiny special purpose operating system. There I said it.
It can access file systems. It can load programs/modules. It has an interface for modules to access these file systems. It can optionally set up protected mode for a multi-stage bootstrap.
Developers spent many hours developing this bit of complex software. Yet your computer only lands there briefly before moving on to the real goal of running a conventional operating system. UEFI is just a Rube Goldberg intermediate step.
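To put some code behind that “tiny OS” observation: a hedged EDK2-style sketch (hypothetical function name, error handling trimmed) of how a UEFI module opens a file on a FAT volume through the Simple File System protocol.

```c
/* Sketch: reading a file off a FAT volume via the Simple File System
 * protocol. DeviceHandle is assumed to be a partition handle that carries
 * the protocol. */
#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/SimpleFileSystem.h>

EFI_STATUS ReadLoaderFromVolume (IN EFI_HANDLE DeviceHandle)
{
  EFI_SIMPLE_FILE_SYSTEM_PROTOCOL *Fs;
  EFI_FILE_PROTOCOL               *Root;
  EFI_FILE_PROTOCOL               *File;
  EFI_STATUS                      Status;

  Status = gBS->HandleProtocol (DeviceHandle,
                                &gEfiSimpleFileSystemProtocolGuid,
                                (VOID **)&Fs);
  if (EFI_ERROR (Status)) return Status;

  Status = Fs->OpenVolume (Fs, &Root);          /* mount the FAT volume */
  if (EFI_ERROR (Status)) return Status;

  Status = Root->Open (Root, &File,             /* open the OS loader   */
                       L"\\EFI\\BOOT\\BOOTX64.EFI",
                       EFI_FILE_MODE_READ, 0);
  if (!EFI_ERROR (Status)) {
    /* File->Read() would pull bytes out here; gBS->LoadImage() could
     * then turn them into a runnable image. */
    File->Close (File);
  }
  Root->Close (Root);
  return Status;
}
```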
I boot over SPI. On one chip it’s done in hardware; on another, it’s a mask ROM on the CPU (which also supports USB boot… which had some bugs).
Do like POWER9 and a bunch of the coreboot guys: strip UEFI out and replace it with Linux, a la petitboot. Have it come up off the BIOS chip, scan drives, and chainload the next OS.
I upstreamed some of my company’s patches to coreboot. It’s an interesting idea. But I’m more of a uboot and redboot kind of guy. I’ve met the Apple openfirmware team a few times at SVFIG. OF would probably be my dream job over UEFI or coreboot.
U-Boot implements the UEFI standard these days which is huge for the Linux distros because it makes it possible for them to support a bunch of embedded platforms without having to build per-platform images.
It’s one of the many things U-Boot can do. We go the DeviceTree route for booting because that is de facto standard for ARM Linux. On ARM platforms, UEFI is more of a concession for the vendors that would rather do things the hard way. ;)
P.S. Devicetree’s structure comes from OpenFirmware. Even if Forth is dead, the legacy continues.
@jonmayo: On U-Boot these days UEFI is the easy way! Much better than the old method of custom U-Boot scripts. For embedded Arm Linux the direction things are going is UEFI+Devicetree. UEFI gives the ABI for finding and running the OS loader, and Devicetree describes the platform. The EBBR standard[1] covers how it all fits together, and Arm’s SystemReady program[2] tests devices for compliance with EBBR.
[1] https://github.com/arm-software/ebbr
[2] https://developer.arm.com/architectures/system-architectures/arm-systemready/ir
Full disclosure: I’m the EBBR maintainer, Arm’s SystemReady IR architect, and in years past I was the Linux Devicetree maintainer
+1 I wish the article had mentioned OpenFirmware, which was on PowerPC Macs and programmable in Forth. I always wanted to learn Forth and play around with it, but that seemed pointless when Apple went to x86-64 CPUs. Booting Target Disk Mode to troubleshoot one non-booting Mac with another, or even booting into OF with an external FireWire CD/DVD drive, was a treat compared to troubleshooting non-booting Windoze machines. It may be obvious, but I wouldn’t know what to ask the Apple team; I was just lucky to get Yellow Dog Linux running back then. Frankly I’d be happy if the industry sticks with UEFI as long as they had BIOS. The devil you know…
Mecrisp-Stellaris Forth on an STM32 can get you hacking on Forth quickly. (on hardware less than the price of a cup of coffee).
For a modern take on the old language there is Factor. Which is a nice x86/x64 interactive interpreter/compiler for a Forth-like language.
Article correction: the term BIOS (Basic Input Output System) did not originate with “IBM CP/M”. It originated with “Digital Research CP/M”, which Seattle Computer Products cloned for the 8086 as QDOS (aka 86-DOS), and later licensed to Microsoft to relicense to IBM as “IBM PC/DOS”
You are mistaking DOS for BIOS. BIOS happens BEFORE DOS. Check out Dr. Dave Bradley on the origin of BIOS.
Lew’s right. CP/M kept the I/O separate, so it was easy to move CP/M to different hardware. For a lot of computers, the BIOS was on a floppy disk, not rom. So anytime you changed video boards or went from serial terminal to video board, the user could fix the BIOS.
CP/M’s key innovation was the separation of a bunch of hardware routines from the actual OS runtime (in the case of CP/M, the shell).
The IBM PC’s BIOS took this a step further by putting those routines in a ROM chip on the motherboard. Otherwise, they’re basically identical. DOS’ shell calls into the BIOS routines every time it wants to print characters to the screen, get input, or access a drive, same as CP/M.
I still have my Digital Research CP/M 2.2 manuals handy, so let’s look…
In the “CP/M 2.2 Alteration Guide” ((c) 1979, Digital Research), the third paragraph of the Introduction says that “CP/M is separated into three distinct modules”, and goes on to list them as “BIOS (basic IO system which is environment dependant), BDOS (basic disk operating system which is not dependant on the hardware configuration), CCP (the console command processor which uses the BDOS)”. Chapter 6 details the “BIOS ENTRY POINTS” and Appendix B provides a listing of “THE MDS BASIC I/O SYSTEM (BIOS)”
Having written both a boot loader /and/ a BIOS for CP/M 2.2 on my Cromemco Z2 system, I can assure you that CP/M required a BIOS.
As for Dr. Bradley, he seems to have been critical to the development of IBM’s BIOS for their 8088 and 8086 computer offerings, many years after Gary Kildall wrote the first BIOS for his fledgling CP/M operating system.
I brought up Calif Computer Systems Z80 on CP/M. CP/M was supplied with BIOS in 8080 Assembler. You added code for your floppy controller and disk drives for Basic H/W operations like number of tracks, sectors per track, sector size, seek command, read command, etc. Those were the days. It’s been a long time. I wish I still had my CP/M manual and BIOS code.
UEFI is chaos, not a BIOS. Many dark spirits can hide in there – some listen to communist orders, some are ready for anarchism.
Sorry, I stay with ancient Sun Ultrasparc. It has FORTH and that’s not UEFI.
if that is where you stay, who am i to try to change your mind. may the FORTH be with you no matter what the calendar says.
Have you met our lord and savior, linuxboot/petitboot?
Available by default on a cousin POWER9 near you.
UEFI is just an ABI, and quite a sane one at that (with the maddening exception of using UTF-16). Most of the complaints about UEFI I hear tend to be about the implementation on various machines. U-Boot’s UEFI implementation on the other hand is quite tidy and sane.
I’ve never used it.
But my recollection was there were problems, or at least perceived problems, trying to use Linux on UEFI machines.
And that seems to have faded, so I assume things have been fixed somewhere.
That’s ancient history. It works much better now. If you go through the EDK2 commit logs you’ll see plenty of Linux people.
UEFI is hugely important to infrastructure. Without it you can’t boot from volumes larger than 2TB.
It also provides a native shell rather than needing to boot into FreeDOS or an equivalent to do firmware updates.
It wasn’t made for home users, you can tell because most of the functionality is locked down or disabled on consumer motherboards compared to enterprise counterparts. But because the architecture is shared between home and enterprise, home users get the upgrade to UEFI whether they need it or not.
You could have a less awful boot system with large storage volumes (yes, I really don’t like it, as it’s always been more trouble than traditional BIOS; I’ve never had any issues with traditional BIOS beyond buggering something up myself, and I’ve had more than a few fights with the EFI shell and UEFI stuff in general, though it does have its positives, I suppose)…
That the traditional old BIOS didn’t isn’t a surprise, as even 100 GB in a volume was never on the cards when BIOS was created… A modernised version could be set up with such things. Same reason old computer OSes could only address 4GB RAM etc. – it’s not worth building in such capacity when the hardware doesn’t exist, and won’t for ages…
What were your issues with UEFI? I have used it for some time already and I much prefer it compared to BIOS.
I like that it boots files off a standard partition instead of some static bytes at the start of a drive based on “magical” flags. I can change the files with any OS, without special tools to copy those bytes or handle flags.
Many and numerous issues, all because it’s such a complex beast trying to be too clever for its own good; many of them just flat out stop booting to the real OS… And many minor annoyances in how it’s often implemented, like the motherboard keeping phantom installs in its storage, so two-disk dual booting can make it very annoying to find the right disk.
BIOS being so simple is nearly bulletproof as long as the bootloader it jumps to isn’t broken, and those bootloaders being simple are trivial to fix (usually by just replacing it with the new fresh working version).
But to be clear I’ve been using it for ages too, and it generally works well enough – its just irritating when it goes wonky, and the way it works doesn’t fill me with any joy, its like the Intel ME type bollocks nasty pervasive stuff that nobody (near enough) actually needs…
What I understand is that you can’t boot from *partitions that start beyond 2TB*.
There’s nothing to stop someone telling the BIOS to look at a higher memory address. It’s just ones and zeros.
These limits are usually related to integer width. 2TB sounds like a 32-bit signed integer (or unsigned with a flag), for example.
Except for when that address doesn’t fit in the integer it uses.
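For concreteness, the arithmetic behind that 2 TB figure, as a quick C sketch assuming 512-byte sectors and MBR’s 32-bit LBA fields:

```c
/* Back-of-the-envelope check on the classic MBR/BIOS 2 TB ceiling: the MBR
 * partition entry stores starting LBA and sector count as 32-bit values. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t max_sectors = UINT32_MAX;  /* 2^32 - 1 addressable LBAs */
    uint64_t sector_size = 512;         /* bytes, the traditional size */
    uint64_t max_bytes   = max_sectors * sector_size;
    printf("%llu bytes ~= %.2f TiB\n",
           (unsigned long long)max_bytes,
           max_bytes / (1024.0 * 1024 * 1024 * 1024));
    /* Prints ~2.00 TiB. With 4 KiB sectors the same 32-bit field reaches
     * 16 TiB, which is one workaround some drives and tools exploit. */
    return 0;
}
```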
There are severe limitations to what you can boot with a legacy PC BIOS; but it’s not really fair to say that doing slightly less archaic things without UEFI is impossible; or even that UEFI can do it, strictly speaking.
I don’t doubt that there are ways to sneak around this; but UEFI systems typically rely on components stored on the EFI System Partition, a deliberately small partition with a simple filesystem (pretty much FAT); and at that point you basically have the same requirements as you would if you were chainloading with a bootloader stored on a partition small enough for the BIOS to understand, and letting your bootloader handle the volumes and hardware that would upset the BIOS.
It’s certainly true that the legacy BIOS is deeply inflexible (except in the dangerous sense that option ROMs can do more or less whatever they like); but UEFI takes that fact and says “so we’ll build an entire operating system into the firmware!” rather than “so we should make it easy to boot a second-stage bootloader and get out of the way as fast as possible”
There also were Open Firmware and the normal EFI.
Apple used them on PowerPC Macintoshes (New World Power Macs) and Intel Macs (Mac Pros etc.), respectively.
The latter could run a BIOS emulator, even, that was included in the Boot Camp software. It was good enough to boot up Windows XP and higher.
That being said, you’re not wrong about the limitations.
However, the boundaries had been pushed many times in the past, already.
Originally, there was int13h and CHS drive geometry, the 1024 cylinder limit (0-1023), etc.
Then came extended int13, E-CHS, LARGE, LBA, LBA48 and so on.
On top of that, early EIDE drives did perform address translation transparently on their own, to compensate for the combined limitations of the IDE specs, BIOS specs, etc.
Making a BIOS specification with support for an expanded MBR or a hybrid MBR-GPT format would somehow be possible, I suppose. Very hacky, but doable. Let’s just think of that LBA sector-shifting hack that large HDDs with internal 4k sectors had to use to compensate for Windows XP’s misaligned NTFS partitions.
Essentially, it’s mainly necessary that the BIOS-based OS can boot from its boot partition, but also sees the whole drive capacity. The rest of that capacity could be accessed as another drive letter, at least.
A sad but true UEFI lock-in story…
Model Name: ASUS VivoBook Flip 12
P/N: TP202NA-OB04T
S/N: J7N0GR01E971274
MFD: 2018-07 CN: BY9S
Made in China/15105-04971000
Purchase Location: Office Depot, Pembroke Pines, FL
Purchase Date: 21-Sep-2018, 20:01PM EDT
Receipt No: 22VTP94P3556YB48F
Price: $199.99
Plus Sales Tax 6%: $211.99
After checking with both the manufacturer (ASUS) and the reseller (Office Depot) during pre-purchase, both told me this little laptop came with Windows-10S but could easily be converted to Windows-10 (this was true). Or if I wanted I could install a mainstream Linux distribution (Debian-based was specifically mentioned) provided the distro supported UEFI non-legacy boot (Linux Mint met these requirements). Dual boot was not guaranteed, but I did not care about that at all. I just needed to put Linux on the machine.
Six weeks later and dozens of emails with ASUS Customer Service elevated to the highest level, ASUS finally admitted that UEFI was limited by an agreement between Microsoft and ASUS so that the laptop would only ever boot to a Microsoft OS. Anything else was out of the question. There was nothing in any of the laptop documentation that described this limitation.
I tried to get my money back from Office Depot first, then ASUS. Both refused saying I had the laptop too long by then (two months of fighting with ASUS). I gave up, reinstalled Windows from the Recovery USB, charged the battery to 50%, and physically disconnected the battery internally to prevent discharge. I then stored the laptop thinking I might eventually sell or donate it. Later on SARS-CoV-2 came around. I decided to dust the laptop off and donate it to a family with kids trying to distance learn.
The machine would not boot. The battery was still good, but the laptop would never charge it. I’ve seen this nonsense before with laptops that were unpowered for a long time and the on-board backup battery would discharge. That was not the case, I measured the on-board battery and it was good. I still had the Windows Recovery USB drive and could boot from it, but after a few minutes the laptop would turn itself off claiming the battery was bad. It wasn’t.
Again, ASUS could provide no solution on how to recover the laptop other than to send it in for an out-of-warranty repair which even without shipping would cost far more than I originally paid for the laptop – plus parts.
All of this was caused from the outset by a UEFI lock-down by Microsoft that ASUS agreed to, but never disclosed. I remember when UEFI first appeared everyone cried out it would be misused to lock your hardware to a specific brand and/or operating system. Here we are, years later and UEFI is ubiquitous with its main purpose to force product/OS lock-in.
I’m fairly confident you’re mistaken.
I am too. I’ve never had a Linux live USB refuse to boot provided it was made right. All UEFIs I’ve seen have a legacy option, although even if they didn’t, most live USB images have EFI support baked in. Hell, even Ubuntu supports Secure Boot natively and has signed kernels (ewww).
The Linux live installer boots fine. The problem is somehow the UEFI BIOS (or something) prevents the Linux installer from writing to the soldered-down eMMC drive. Only Windows recovery or new install media is allowed to write to the drive. I researched this online and all indications are it’s a real thing with no workarounds I could find back then. Eventually ASUS came clean and admitted it was locked to Microsoft and ASUS had to agree to it. I would try again now that time has passed, but as I said earlier the laptop no longer works with the original battery, even though the battery still holds a charge.
Oh geez. Odds are you probably could still use it, but you’d need a bootloader that can boot from an eMMC; the Linux kernel should have everything else it needs. Yikes.
Yes, I had the same issue with an old Dell Inspiron 15. The ones with a weird 1-core Celeron. I could boot off the USB, do live Linux, install no issue, but good luck getting UEFI to boot the damn thing on its own.
I think UEFI is cool. Also could be spyware like some people state. But my biggest beef with it is no real good consistent documentation.
Thanks for telling mankind “Buy Linux-supported hardware”
Have you seen the outrageous prices the likes of System76, Think Penguin, etc. are asking for “Linux Compatible” machines? A Clevo-made generic mediocre i5 laptop with Win10 that costs around $550 on the street will cost you $1,000 from those crooks.
Please compare “product you want” with “product you want”.
Be aware of how much of the $550 system goes to Microsoft and how much that enables Microsoft to suppress hardware vendors.
Complaining here, shouting here about crooks, will not make any difference.
Wallet voting will.
So buy something that isn’t from system76? Lots of fish in the sea.
Hell, Dell sells cheap desktops with Ubuntu preinstalled.
No one will admit it, but Secure Boot with locked-in digital signatures on OEM machines is most likely why Microsoft pushed UEFI so hard in the consumer space. I have run into only one of these in the wild, and it was in the early days of consumer UEFI. It completely disabled the ability at the time to upgrade the machine’s OS.
I had a Compaq server with dual (slot style) Xeon CPUs that had a BIOS programmed to block Windows XP from using both CPUs. Any Windows XP Pro install says it’s for 1-2 CPU. I could install Windows 2000 Pro, Windows NT4, or any version of Windows Server released after NT4 and use both CPUs, or any Linux distro. It was XP and only XP that was blocked from using both. This was before Vista so no idea if that or Win 7 would have been able to use both CPUs.
Any attempt to force XP into using both CPUs resulted in a complete stop partway through booting.
So I put 2000 Pro on it and gave it away.
A funny oddity about this Compaq server was the instruction label for opening the cover was on the inside of the cover, along with all the other labels, including a label that instructed where to apply all the labels.
So to see the instructions for opening the cover, and other info one might need or want to know prior to opening the cover, the cover had to be removed.
I thought that was a limit Microsoft put on XP
Gregg said “Any Windows XP Pro install says it’s for 1-2 CPU.” and expressed being disappointed by Microsoft.
XP Pro supports running on one or two CPUs, or a one- or two-core CPU. That Compaq server had something tricky which blocked XP, and only XP, from using both CPUs.
That sucks. I wonder why they didn’t just say so to begin with? Surely that one sale of a budget device isn’t worth alienating a customer over. Would’ve saved you from spending money creating e-waste. What would be the thing to watch out for in the future? Is the integrated storage drive a decent red flag?
Nice. Now we need another article in this series describing the commands of the UEFI shell, and what can be done there.
Also, a good primer on how Secure Boot works would be nice, too.
That’s not a bad idea, though such a guide would be a moving target (though perhaps things have stabilized more in the last 5 years or so).
It can be complicated with customized versions of the shell (one company I worked for in the past added some vendor specific commands to the shell they shipped in their BIOS image).
But some of the core commands like dumping the device tree, examining handles, and even connecting drivers manually might be helpful to those who see the utility in the shell.
“Another point in UEFI’s corner is that it makes selecting different boot devices and updating the firmware on your machine.”
This sentence no verb.
“it makes selecting”. Makes is the verb. It’s missing an adjective.
D’oh. Yes, indeed. Stupid brain.
“Selecting” and “Updating” are both verbs.
I agree that this is not a proper sentence, but replacing “makes” with “allows” fixes it and is what I believe the writer intended to say.
The sentence has two verbs. “Is” is the verb (linking verb) of the independent clause. “that it makes …” is a dependent (noun) clause that serves as predicate nominative, i.e., the dependent clause renames or further describes the subject (which is “point”). “Makes” is the verb of the dependent clause. “Selecting” and “updating” are the gerunds (nouns) of gerund phrases that are the direct objects of the dependent clause.
As suggested by a previous commenter, the sentence can be fixed by changing “makes” to “allows.” This changes the verb of the dependent clause, but the number of verbs in the sentence remains the same.
baffled
or is it
blown away
very informative.
thank you (gracias)
would you also conclude sentence structure(s) reveal the native language of a writer/speaker?
what about individuals with multi native languages?
so many questions, to “google” I go.
linguistic hacking?
My Asus Crosshair VIII Hero has had quite a few UEFI updates…….21 offered……
Some fun info at the link from that era when EFI was on Itanium machines but before UEFI was a thing for everyone else.
I remember playing with LinuxBIOS or something similar; it may even have been on a SiS630 as mentioned in the Linux Journal article. Whatever motherboard it was, it booted FAST, since all it had to do was the minimum needed to hand everything off to Linux. When UEFI was announced I thought it was going to be equally simple, but alas…
https://www.linuxjournal.com/article/4888
I have a few old machines which run win95 and have stocked up on number 2 pencils and paper pads. Perhaps our overlords will understand this after the robot apocalypse but I surely do not. I don’t want to torment my remaining brain cells with this stuff.
“… as training large amounts of memory takes a very long time.”
You aren’t kidding. When the first of the giant machines with terabytes of *memory* arrived in our lab, we were pretty disturbed to see that they will often just sit there for 15 minutes or more after power-up. The part that starts up the video output, blinks the LEDs, and beeps the speaker comes WAY after the computer has spent a TON of time testing memory.
>This is because memory needs to be trained and links initialized before the processor can use them.
The training (calibration) is to optimize the interface between DRAM and memory controller: timing, track skew, eye opening for DDR4. They are pushing the memory bus speed so far that they have to squeeze out the last bits. Kinda like modem training, in a sense.
For lower speeds (i.e. JEDEC speed), sometimes the training could be bypassed.
> as training large amounts of memory takes a very long time.
Telling me that you have no idea what you are talking about without telling me. The size of memory has nothing to do with the amount of time needed for training. Might want to educate yourself on what training means: https://www.systemverilog.io/ddr4-initialization-and-calibration.
The built-in self-test (BIST) for testing memory is what depends on the size of memory.
I don’t claim to be an expert, but “large amounts of memory” doesn’t necessarily mean large capacity. Based on the article you linked, it looks like training takes place on each data line between CPU and memory, so a server with large amounts of memory, meaning 8 or more DIMMs, would have more lines to train and could take longer. Unless it happens in parallel. Again, I don’t claim to be an expert.
I just wished CSM was not considered to be removed.
Having alternatives at hand is vital in our digital society.
Also, why the fuss about BIOSes being hard to implement?
There’s a whole BIOS industry. Companies like AMI sell a whole line of BIOS products that come with a suite of tools to customize these BIOS modules according to the hardware maker’s needs.
In addition, there are open implementations like SeaBIOS.
They contain all the basic features needed to get popular OSes run.
Why can’t they be included in UEFI as a cheap, generic CSM payload?
CSM is a huge security hole due to things like PCI option ROMs.
While there are other motivations for moving away from it, that’s the primary one.
I think Option ROMs are great, though. They allow for adding functionality at the BIOS level. Things like RAID controller cards’ BIOSes or the XTIDE Universal BIOS come as Option ROMs. Same goes for the VGA BIOS or VESA VBE… Or just think of the DOCs (DiskOnChip 2000). They are bootable because of this. Some people even installed DOS and Linux in ROMs, too.
In my opinion, it’s a bit hypocritical to speak about security/safety when it comes to UEFI. UEFI allows everyone/everything to take over full control – except its legitimate user sitting in front of the PC.
The whole certificate scheme is nonsense. Once one of the master certificates has leaked, that security feature goes “poof”. Even more, the fact that Microsoft/Intel would like to get rid of the ability to disable Secure Boot at some point is an affront to every mature user. Imho.
UEFI is the biggest FAIL in computer history!
Everyone please disable it and use BIOS!
Disable UEFI? I don’t think that’s an option in almost every case. Sometimes your firmware will offer you a BIOS compatibility mode where your UEFI system pretends to be a BIOS, but that doesn’t solve most of the issues brought up in this article – resident code on the CPU, increased complexity, etc. I mean technically I suppose that counts as disabling UEFI since the “extensible firmware interface” can’t interface with your OS anymore, but you get my point.
In fact, I would argue that BIOS compatibility mode disables one of the big strengths of UEFI – the sensible layout that it expects the disk to be in. No more magic data written at the start of the disk – plop your bootloader in a FAT partition and you’re good to go!
It’s also nice to have universal tools that can modify your boot order and such from right within your OS, and to be able to boot from GPT media. A lot of the other advantages of UEFI are unfortunately only accessible on enterprise equipment, though.
“agnostic at best and antagonistic at worst”
Blimey, you weren’t kidding were you? As the above comments prove.
Also, you’ve missed the main advantage of UEFI in my mind, much quicker boot times.
Be careful, sarcasm and irony don’t work well over the interwebs. Some people may think you’re serious. ;)
Who cares if it boots quicker when these days the majority that (foolishly) use Windows never actually get their machine properly turned off anyway, so boot will always be quick…
And a less bloated OS boots damn fast anyway. Plus, having the splash screen hang around long enough for the slow wakeup time of the modern monitor means you can actually see which key gets you into the BIOS settings and boot-selection type stuff when you need it…
Yep, and few with experience doing things like writing an MBR based bootloader or in the PC firmware development space.
I’d suggest moving the info about being a PC specific discussion earlier in the article.
One thing to keep in mind is that the UEFI spec is primarily an interface specification.
The Platform Initialization spec is about an architecture that is tied to UEFI and enables a silicon vendor like Intel to supply more reference code, rather than relying on IBVs to author the same code from voluminous documents containing the details of what must be written where and happen at what time.
Beyond that, a few more comments:
“The Intel Management Engine (ME) or AMD Platform Security Processor (PSP) does most of the SEC phase, such as flushing the cache and starting the processors.”
I’d like to see the source where this info comes from because when I worked in that space some time ago SEC started when the BSP was released from the reset vector. It sounds like there is some confusion with power sequencing which is the process of powering on the various pieces of hardware in the correct order. For example, the hardware to get data from the SPI bus and present it in the CPU’s address space needs to be operational before the CPU can come out of reset.
“Many Intel server platforms rely on the Board Management Controller (BMC) to train memory, as training large amounts of memory takes a very long time.”
Again, I’d like to see a source for this info. From at least Thurley to Purley, memory training has been done as part of the memory reference code (MRC) included in the Uncore, which is a piece of code supplied by Intel. AGESA is the AMD equivalent, IIRC.
Way back in Thurley on the Nehalem CPUs, Intel wanted dedicated silicon to do the DRAM training but instead ended up doing it in software (again, at least until Purley, which is the last PC platform I worked on). Speaking of Purley, the memory training times on that platform were abysmal. It was really bad in the quad-socket version of the platform (it seriously hampered development time). Intel tried a new trick here: they set up the MRC to be multithreaded and have a core on each socket do the work for the memory connected to the CPU in that socket. This was not a simple thing to achieve.
“However, many ARM platforms can also leverage UEFI. Only time will tell which way the industry moves.”
Whenever we see a new something that is supposed to displace the PC, and it’s great because it “runs Linux”, I like to ask: can I run a distro of my choosing on the platform? Can I run Windows on the platform? What about QNX or Haiku? If a company did want to get their OS running on the platform, do they need hardware docs, or can they get things working without those and rely on OEM-supplied drivers for the hardware-level support? Can the platform have multiple OSes co-exist and load them from arbitrary locations?
The market will move to whatever lowers the costs for the silicon vendors, doesn’t eat into their profits, and helps them sell more chips.
If UEFI can demonstrate it can do that, it will have an advantage. If it can’t demonstrate a good benefit, then it won’t displace the current field of ARM solutions. In the PC world, it has shown a reasonable value benefit over legacy BIOS while having the support of Intel and AMD.
Also, I’d like to point out http://www.tianocore.org and the actual open source project at https://github.com/tianocore/edk2
For those interested, this isn’t a bad place to start. There are some CPU specific things missing like the assembly code that sets up the C environment before jumping into this file (like setting up cache as ram on Intel CPUs).
https://github.com/tianocore/edk2/blob/f0f3f5aae7c4d346ea5e24970936d80dc5b60657/UefiCpuPkg/SecCore/SecMain.c
When will someone take a specific motherboard, with an Intel or AMD CPU that has built in GPU+sound, and write a replacement UEFI that operates as a game playing machine?
A stripped down little board with a powerful CPU, decently powerful GPU, a NVME PCIe SSD, USB for control inputs, and there’s an arcade machine with games that are easy and inexpensive to swap out, or make a multi-game machine with the OS in place of UEFI.
Isn’t this pretty close to what the current Xbox, PS4 and PS5 do?
All of them have custom x86 SoC’s from AMD and boot their own specialized OS.
“A stripped down little board with a powerful CPU, decently powerful GPU, a NVME PCIe SSD, USB for control inputs, and there’s an arcade machine with games that are easy and inexpensive to swap out, or make a multi-game machine with the OS in place of UEFI.”
Do you realize how crazy this sounds, especially in the current market? The consoles and the Steam Deck are about as stripped down as they can get, and they still aren’t cheap enough to have one for each piece of software you want to run.
And UEFI is not suitable as an OS replacement; just the addition of virtual memory management, privileges, and process isolation would be a huge undertaking. And very little of the UEFI code can run in that environment; it’s not designed to.
Is there a way to UEFI boot and remove resident services? My understanding is that boot services [usually] cannot be unloaded. How is this analogous to legacy BIOS option ROMs being resident? It appears they are in the same situation in this regard?
The OS is free to ignore the resident runtime services. They are limited to a handful of operations (get/set variables, get/set time of day, power off, and update capsule (firmware update)), and none of them are required for the OS to run. If the OS doesn’t call into a runtime service, then it does not get executed.
You can disable runtime services in Linux by passing `efi=noruntime` on the kernel command line.
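As a small worked example of an OS consuming a runtime service, here is a sketch that reads the BootCurrent variable from Linux through efivarfs, assuming the usual mount at /sys/firmware/efi/efivars; the first four bytes of each efivarfs file are the variable’s attribute flags, with the payload following.

```c
/* Sketch: read the UEFI BootCurrent variable via Linux efivarfs.
 * The GUID suffix is EFI_GLOBAL_VARIABLE, fixed by the UEFI spec. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    const char *path = "/sys/firmware/efi/efivars/"
                       "BootCurrent-8be4df61-93ca-11d2-aa0d-00e098032b8c";
    FILE *f = fopen(path, "rb");
    if (!f) { perror("fopen"); return 1; }

    uint32_t attrs;           /* 4-byte attribute prefix added by efivarfs */
    uint16_t boot_current;    /* BootCurrent payload is a UINT16 */
    if (fread(&attrs, sizeof attrs, 1, f) != 1 ||
        fread(&boot_current, sizeof boot_current, 1, f) != 1) {
        fclose(f);
        fprintf(stderr, "short read\n");
        return 1;
    }
    fclose(f);

    printf("attributes: 0x%08x\n", attrs);
    printf("BootCurrent: Boot%04X\n", boot_current);
    return 0;
}
```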
UEFI Secure Boot (and its key management infrastructure) is one of the pillars of building a computer that’s actually resistant to attack while in the possession of an attacker. A little trust (and lots of money to suppliers) goes a long way to building a system with strong security guarantees.
‘guarantees’ is misspelled, it is spelled ‘promises’ :-/
If you want to vet all the source code of the Qubes stack so you can run an OS without trusting anybody, by all means. I have things to build and risks to mitigate at finite cost.
I wonder how many people care about ‘actually resistant to attack while in possession of an attacker’. I know it isn’t a concern to me as a home user. Someone takes the computer … big deal (other than the aggravation and expense). Just needs to be replaced. I would take ‘performance over security’ any day of the week … within reason of course. Firewalls for example are essential … A lot of the CPU mitigations are not … or secure boot.
I find it ironic how we don’t bat an eye at security measures when they are on things like our phones, but when it happens on a PC, it’s a heinous crime. I wish people would fight as passionately about embedded platforms like phones or routers to provide the same sort of software freedom as we expect from our PCs. It would be interesting to see people rail as hard against ARM TrustZone as they do about Secure Boot or Intel Boot Guard.
When you replace a phone every couple of years, “security through obscurity” often seems good enough. I guess some people do that with laptops, but there are still those of us trying to get 5 years or more. I’m just talking about security here. And routers… everything about home routers is substandard and insufficient. If only the industry cared… On software freedom, again, the smartphone space just moves so fast… the thing about x86 and x86-64 is that much of the firmware gets reverse engineered, at least enough to install Linux or BSD. With a large enough userbase, it becomes a matter of time. With smartphones, I think as demonstrated by CalyxOS and GrapheneOS, by the time everything is secured, there are whole other generations of Pixel. Or like Tizen and SailfishOS after MeeGo, they just limp along as the hardware they were made for obsoletes (is that a word? lol)
I heavily dislike Secure Boot.
What about independent programmers, developers?
If Secure Boot becomes always-on, how can they continue writing or using their own homebrew OS?
Let’s just think of embedded scenarios, also, where VMs are not good enough.
If someone wants or has to write custom software, say for an ATM or similar terminal, Secure Boot turns the PC’s master (user, administrator) into a slave (brainless consumer). This is somewhat 1984, IMHO.
Honestly I like UEFI. I usually dual boot Windows and Ubuntu, and with BIOS there have been times where Windows updates and then I have to go to hell and back to hammer GRUB back into the boot sector. With UEFI I’ve never had that happen, and if it did, I would just need to go into the UEFI settings, point it at GRUB’s EFI file, then reboot.
Sounds like you’ve never dealt with anything else, though. Of course UEFI seems great compared to BIOS. Check out the communities of Libreboot, which was only on obsolete hardware for a while, and now Librem (PureBoot), System76, and Star Labs. I mean, even Chromebooks’ coreboot seems more secure than UEFI. I don’t have great technical knowledge, but comparing these boot firmwares gives me pause about the security of UEFI.
Man, if y’all think UEFI is wild, just wait until you learn about option ROM and the fabulous constellation of tiny unsecurable Linux kernels living rent-free in the components of your system. :V
I thought that was only an old way for legacy PCI systems; I think there were even some UEFI option ROM cards with Secure Boot that improved the security of some old BIOS systems. Still, it seemed to me a problem of the past, not the future.
Which reminds me, I was disappointed that the article named Phoenix and not Compaq, or better yet Columbia Data Products’ MPC 1600, the “Multi Personal Computer,” which was the first to legally reverse engineer IBM’s BIOS. Unfortunately they didn’t think to be as IBM software compatible (PC-DOS compatible) as Compaq did.
I can understand the fear of unsecurable Linux kernels.
My fear is being restricted by “secure boot” on which kernels I’m allowed to use.
I think the point was more about the embedded MINIX OS inside the firmware of many Intel chips. When the CPU is itself just a virtual behavior provided by an emulator running on a simpler CPU, if there are bad actors involved, the problem happens LONG before the first byte of EFI code even runs.
It’s a reasonable idea: simple cpu runs emulator to add security features/vm isolation instructions, etc, and the emulator maps as many instructions as possible 1:1 with the simple-cpu instructions. Shadow instructions where behavior depends on virtual security state.
It’s much easier to implement complicated behaviors and considerations in code. Much harder to do that via gates and boolean logic… And when the new behavior has to touch large swaths of the cpu state machine, the debug and validation runs alone could take months of runtime.
Much easier to take the “sweet 16” idea and use it to emulate security/virtualization state isolations, and weld it inside the cpu. Write it fast, ship it, and if a bug shows up later, ship an updated cpu firmware.
Problem is, it creates a new class of hidden hazards that are potentially impossible to detect, because there is no practical way to introspect the runtime state of the firmware that “implements” the cpu you think you are running on.
In my experience with UEFI, I can conclude that its “evils” far outweigh any alleged benefits to consumers. I mean, from the perspective of an end-user, UEFI et al serves no functional purpose other than:
Making it difficult to boot installation media. Since the quick-boot feature is so…quick, it usually skips the “press xx to enter setup” or “press xx to enter boot menu” prompts. This is nice and all, but unless you research how to perform these functions beforehand, you can’t know what you don’t know. Also, it allows the host OS to disable/lock-out such features, making entry nearly impossible on a system that won’t boot.
It’s FULL of sneaky tricks: even if the CPU is x64, the UEFI will only accept a 32-bit x86 bootloader. They do this a lot on tablet-based PCs to restrict OS choices and prevent you from installing a 64-bit OS, for some reason.
Ever wonder how M$ used to lock users out from being able to install an alternative OS on certain laptops? The magic of UEFI!
Also, UEFI stores wifi passwords and AP names, GPS location data, the serial number of the machine or board, a small log of access times, usernames and passwords, and even personal identifying data. Don’t believe me? Reverse the EFI on a Macbook or any UEFI laptop; most of the information isn’t even encrypted and is stored as strings.
UEFI is the way manufacturers are slowly taking away control of the platform and giving themselves (and other special interests, like OS manufacturers) more control over their product.
Did you know that if you just clone the firmware from a MacBook, it also changes its serial number? And did you know that if you remotely lock one of those cloned laptops, it locks both of them? The same is true for most laptops. Killswitches can be nice when used virtuously, but a malicious actor could quite easily exploit such a system and cause havoc world-wide. Imagine what would happen if one day, nearly every laptop sold within the last decade suddenly did a lock and wipe? That’s a little too much “manufacturer control” for my tastes.
This is quite plausible especially with certain proprietary implementations, which can only be updated with a copy of Windows (Wine can’t do the peeking and poking yet).
Been working on my own ARM based computer system for a bit. I know all this is very plausible.
I’ve worked with MinnowBoards too, which are bring-your-own-UEFI devboards. I sadly bricked mine a while ago because of some 1.8V-3.3V frying of the CMOS SPI chip… (making is learning from your failures, though)
My hands are very shaky when it comes to SMT soldering.
It’s hard not to think of UEFI as the systemd of boot firmware.
“The biggest complaint with UEFI is that it is a closed black box with unimaginable access to your computer and stays resident after the computer boots.”
“Ultimately it is up to you whether you care about the boot process of your device.”
Being that it stays resident, the second quote doesn’t tell the whole story.
It’s about so much more than the boot process.
I am researching this subject now because I’m working on a personal Linux distro.
I am planning on integrating all the features of UEFI that I can into the distro.
If it’s running on my gear, I demand control of it rather than leaving it open for exploitation.
I think it has some great potential to enhance Linux systems.
I am interested in links to all available data regarding the subject.