The Trouble With Intel’s Management Engine

Something is rotten in the state of Intel. Over the last decade or so, Intel has dedicated enormous efforts to the security of their microcontrollers. For Intel, this is the only logical thing to do; you really, really want to know if the firmware running on a device is the firmware you want to run on a device. Anything else, and the device is wide open to balaclava-wearing hackers.

Intel’s first efforts toward cryptographically signed firmware began in the early 2000s with embedded security subsystems using Trusted Platform Modules (TPM). These small crypto chips, along with the BIOS, form the root of trust for modern computers. If the TPM is secure, the rest of the computer can be secure, or so the theory goes.
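The mechanism behind that theory is simple to illustrate. A TPM Platform Configuration Register (PCR) can only be “extended”, never written directly: the new value is a hash of the old value concatenated with a measurement of the next boot stage, so the final value commits to every stage in order. Below is a toy sketch of the idea in C; it stands in a tiny FNV-1a hash for the SHA-1 or SHA-256 a real TPM uses, so the names and constants here are illustrative only:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Toy 32-bit FNV-1a hash standing in for the TPM's SHA-1/SHA-256. */
    static uint32_t fnv1a(const uint8_t *data, size_t len, uint32_t h)
    {
        while (len--) {
            h ^= *data++;
            h *= 16777619u;
        }
        return h;
    }

    /* PCR extend: the new value hashes the old value together with the
       measurement, so the final PCR depends on every stage, in order. */
    static uint32_t pcr_extend(uint32_t pcr, const char *measurement)
    {
        uint32_t h = fnv1a((const uint8_t *)&pcr, sizeof pcr, 2166136261u);
        return fnv1a((const uint8_t *)measurement, strlen(measurement), h);
    }

    int main(void)
    {
        uint32_t pcr = 0;  /* PCRs reset to a known value at boot */
        const char *stage[] = { "bios", "bootloader", "kernel" };
        int i;
        for (i = 0; i < 3; i++) {
            pcr = pcr_extend(pcr, stage[i]);
            printf("after %-10s PCR = %08x\n", stage[i], (unsigned)pcr);
        }
        /* Change any stage, or the order of stages, and the final PCR
           differs: that is what attestation checks against. */
        return 0;
    }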

The TPM model has been shown to be vulnerable to attack, though. Intel’s solution was to add another layer of security: the Intel Management Engine (ME). Very little is known about the ME beyond some of its capabilities. The ME has complete access to all of a computer’s memory, its network connections, and every peripheral connected to it. It runs when the computer is hibernating and can intercept TCP/IP traffic. Own the ME and you own the computer.

There are no known vulnerabilities in the ME to exploit right now: we’re all locked out of the ME. But that is security through obscurity. Once the ME falls, everything with an Intel chip will fall. It is, by far, the scariest security threat today, and it’s one that’s made even worse by our own ignorance of how the ME works.

The Beginning of Intel’s Management Engine

In her talk at last month’s CCC, [Joanna Rutkowska] talked about the chain of trust found in the modern x86 computer. Trust is a necessary evil for security, and [Joanna] contrasts it with the normal meaning of the word, for which she uses “trustworthy”. If you can see the source code for your application, you can verify that it’s trustworthy. But since the application runs on top of the operating system, you have to trust the OS. Even if the OS is verified and trustworthy, it still has to trust the BIOS and firmware. As you keep digging down like this, verifying each layer, you eventually get to some part of the system that you can’t verify and just have to trust, and this root of trust is the role that the ME is trying to play.

[Joanna Rutkowska]’s plan for a ‘trusted stick’, offloading the root of trust to a small USB device
This root of trust on the modern computer is, quite simply, untrustworthy. Instead of a proper BIOS that can trace its origins to the first x86 computers, computers today have UEFI and Secure Boot, a measure designed to only allow signed software to run on the device. But Secure Boot can be disabled, and security isn’t secure if it’s optional; it is even less so when exploits exist for specific implementations of UEFI.

[Joanna]’s plan for truly trustworthy computing is a simple USB thumb drive. Instead of holding data, this thumb drive contains security keys. The idea behind this ‘trusted stick’ is that the root of trust can be built from this stick, and these keys are something that you own and control and can presumably keep secret. Everything else above that is verifiable, and thus doesn’t need to be trusted. It’s an interesting idea, but right now it’s just an idea. And it stands in contrast to the current situation where Intel somehow bakes the trust into the chip for you.

What the Management Engine Is

The best description of what the Management Engine is and does doesn’t come from Intel. Instead, we rely on [Igor Skochinsky] and a talk he gave at REcon 2014. This is currently the best information we have about the ME.

The Intel ME has a few specific functions, and while most of these could be seen as the best tool you could give the IT guy in charge of deploying thousands of workstations in a corporate environment, some of them would make very interesting avenues for an exploit. These functions include Active Management Technology, with the ability for remote administration, provisioning, and repair, as well as functioning as a KVM. The System Defense function is the lowest-level firewall available on an Intel machine. IDE Redirection and Serial-over-LAN allow a computer to boot over a remote drive or fix an infected OS, and Identity Protection provides an embedded one-time password for two-factor authentication. There is also an ‘anti-theft’ function that disables a PC if it fails to check in with a server at some predetermined interval or if a ‘poison pill’ is delivered over the network. This anti-theft function can kill a computer, or tell the disk encryption to erase a drive’s encryption keys.

These are all extremely powerful features that would be very interesting to anyone who wants or needs to completely own a computer, and their sheer breadth makes the attack surface fairly large. Finding an exploit for the Intel ME will be difficult, though. While most of the firmware for the ME also resides in the Flash chip used by the BIOS, the firmware isn’t readily readable; some common functions are in an on-chip ROM and cannot be found by simply dumping the data from the Flash chip.
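For anyone poking at their own machine, the first step is usually just finding the ME region in a dump of the BIOS flash chip. Per [Igor Skochinsky]’s slides, the ME region starts with a partition table whose header carries the ASCII marker “$FPT”. Here is a minimal sketch in C that does nothing more than locate that marker in a dump; the fields that follow it vary between ME versions, so parsing them is left alone:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Scan a raw flash dump for the "$FPT" partition-table marker that
       the ME firmware region is reported to begin with.  This only
       locates the table; it does not attempt to parse its entries. */
    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s flash_dump.bin\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);

        unsigned char *buf = malloc(size);
        if (!buf || fread(buf, 1, size, f) != (size_t)size) {
            fprintf(stderr, "read failed\n");
            return 1;
        }
        fclose(f);

        for (long i = 0; i + 4 <= size; i++)
            if (memcmp(buf + i, "$FPT", 4) == 0)
                printf("possible FPT header at offset 0x%08lx\n", i);

        free(buf);
        return 0;
    }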

This means that if you’re trying to figure out the ME, a lot of the code is seemingly missing. Adding to the problem, a lot of the code itself is compressed with either LZMA or Huffman encoding. There are multiple versions of the Intel ME, as well, all using completely different instruction sets: ARC, ARCompact, and SPARC V8. In short, it’s a reverse-engineer’s worst nightmare.

The Future of ME

This guy wants information on the Intel ME. Also, Hackaday has an istockphoto account.

With a trusted processor connected directly to the memory, network, and BIOS of a computer, the ME could be like a rootkit on steroids in the wrong hands. Thus, an exploit for the ME is what all the balaclava-wearing hackers want, but so far it seems that they’ve all come up empty.

The best efforts that we know of again come from [Igor Skochinsky]. After finding a few confidential Intel documents a company left on an FTP server, he was able to take a look at some of the code for the ME that isn’t in the on-chip ROM and isn’t compressed by an unknown algorithm. It uses the JEFF file format, a standard from the defunct J Consortium that is basically un-Googlable. (You can blame Jeff for that.) To break the Management Engine, though, this code will have to be reverse engineered, and figuring out the custom compression scheme that’s used in the firmware remains an unsolved problem.

But unsolved doesn’t mean that people aren’t working on it. There are efforts to break the ME’s Huffman algorithm. Of course, deciphering the code we do have would only lead to another roadblock: the code in the inaccessible on-chip ROM. Nothing short of industrial espionage or decapping the chip and looking at the silicon will allow anyone to read that ROM code. While researchers have some idea what this code does by inferring its functions, there is no way to read and audit it. So the ME remains a black box for now.
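To see why the unknown dictionaries are the sticking point: Huffman decoding itself is mechanical once you have the code table, as the sketch below shows with a made-up toy table (the ME’s actual tables are reportedly hardcoded in silicon, and recovering them is the hard part):

    #include <stdio.h>

    /* Toy prefix-code table -- NOT the ME's (those remain unknown);
       this only demonstrates the general decoding technique. */
    struct code { unsigned bits, len; char sym; };
    static const struct code table[] = {
        { 0x0, 1, 'a' },   /* 0   */
        { 0x2, 2, 'b' },   /* 10  */
        { 0x6, 3, 'c' },   /* 110 */
        { 0x7, 3, 'd' },   /* 111 */
    };

    /* Read the stream MSB-first, accumulating bits until they match a
       table entry; a valid prefix code guarantees a unique match. */
    static void huff_decode(const unsigned char *src, unsigned nbits)
    {
        unsigned acc = 0, len = 0, i, j;
        for (i = 0; i < nbits; i++) {
            acc = (acc << 1) | ((src[i >> 3] >> (7 - (i & 7))) & 1);
            len++;
            for (j = 0; j < sizeof table / sizeof table[0]; j++)
                if (table[j].len == len && table[j].bits == acc) {
                    putchar(table[j].sym);
                    acc = 0; len = 0;
                    break;
                }
        }
        putchar('\n');
    }

    int main(void)
    {
        /* "abcd" encoded as 0 10 110 111 -> 010110111 -> 0x5B, 0x80 */
        const unsigned char data[] = { 0x5B, 0x80 };
        huff_decode(data, 9);  /* prints: abcd */
        return 0;
    }

Without the right table, the same nine bits decode to garbage, which is exactly the situation reverse-engineers face with the ME firmware.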

There are many researchers trying to unlock the secrets of Intel’s Management Engine, and for good reason: it’s a microcontroller that has direct access to everything in a computer. Every computer with an Intel chip made in the last few years has one, and if you’re looking for the perfect vector for an attack, you won’t find anything better than the ME. It is the scariest thing in your computer, and this fear is compounded by our ignorance: no one knows what the ME can actually do. And without being able to audit the code running on the ME, no one knows exactly what will happen when it is broken open.

The first person to find an exploit for Intel’s Management Engine will become one of the greatest security researchers of the decade. Until that happens, we’re all left in the dark, wondering what that exploit will be.

93 thoughts on “The Trouble With Intel’s Management Engine”

    1. Think of your CPU as having a tiny CPU inside it with access to everything, but you have no access to it. That is the ME, it does whatever Intel, a US company, has told it to do.

      1. It’s bad for other countries, as it has been shown that US companies will often do whatever the NSA or CIA asks of them.
        This is why China and Russia have been working on developing their own fully domestic CPU designs.
        In the end, this sort of security through obscurity and deep government involvement is bad for US manufacturers and could even spell the end of the dominance of US companies in the chip market: competitors only need to produce something that’s just good enough, but can be trusted not to have a back door, for people to start jumping ship.

          1. They’re designing it to use on their own internal systems. They probably have backdoors for everything exported or for citizen use, but they need to /not/ have one for their governmental systems. Russia is using mechanical typewriters again now.

        1. I totally agree with what you’re saying, but I think business models like ARM’s will be the future. What I mean is: ARM designs the chip, then a company buys the rights to make the chip, tweaks it a bit, and sends the plans off to a foundry to create the chips. If all chips were made like this we would have more choice, and more choice reduces government pressure on companies. We then wouldn’t be locked into NSA/CSA-backdoored chips (if that is what is going on).

          1. With “tweaking” comes compatibility problems. You can’t rely on the performance of the chip, so you get software balkanization as well, similar to Linux distributions where everyone is playing by slightly different rules.

            That’s one of the good things that have come from the x86 wintel world.

          2. Interesting idea. However, what is preventing a man-in-the-middle attack here? Specifically, between the time you “send the plans off to a foundry”, and the time the foundry cranks out the chips, can you guarantee that your designs haven’t been mucked with?
            Because ARM designs are fairly standardized, an intruder could develop a somewhat standardized “hardware root kit” for the ARM chips. Then, when the designs are sent to the foundry, they are intercepted, the root kit is inserted into the design, and no one is the wiser. The intruder could devise several of these hardware root kits, so even if the methods of detecting one are developed, there are others that cannot be detected. Done properly, neither the foundry, nor the design originator would realize the design was tampered with.
            Even if you discover the hardware root kit, by the time you devise a solution to detect it, re-send the new design to the foundry, test against tampering at the factory, and finally get the chip manufactured, you could be delayed up to two years. By which time your custom ARM will be almost out of date.
            Eventually, just the possibility of having such a hardware root kit injected into one’s design will be enough to deter some people from even attempting to make their own ARM chip. And, who is to say that most of the ARM chips out there haven’t already been compromised in such a manner?
            Paranoia is a very expensive hobby. You would have to OWN your OWN chip design, and your OWN foundries. And all of it would have to run on your OWN computers with no back doors. This is a tall order even for a nation state, and not just because of the expense. Because if you really think about it, it’s quite a chicken-and-the-egg problem.

          3. Backdoors, frontdoors, windows, dryer vents, chimneys, the key under the doormat, inside the fake rock, the potted plant! The usual answer is to hire proper security testers and programmers before you send the design to the foundry. And as you well know, there are quite a few companies that are fabless. Also, we have seen companies claim they are “closed source” only to find they have the same bugs as the open source they copied (a.k.a. Phuq Allwinner, that company should be dead to us all). If you are willing to go so far as to build a chip and/or license the respective IP modules, it would stand to reason that you a) wouldn’t design by committee, b) would have a huge organization, and c) would have a dedicated third-party reverse-engineering company work to cat-whisker, bus signals, bus current, sig-int signature, fritz, fuzz, frunk, delid, decap, deblob, acid etch, x-ray, electron microscope, black box test, white box test, etc. your chip.

            Finally, design security as a PRIMARY function, not an ad-hoc add-on. Case in point: the FTDI counterfeit-chips article we have here.

            Sure, a hardware backdoor is very viable, but a sane company would notice that the timing and voltage on the chip have changed.

      1. How about the old adage “what’s old is new again”? What’s needed is a step back from this “online everything” mindset, and a return to the old days of “mainframe” architecture, where *dedicated leased lines* were used between the ‘datacenter’ and the remote user terminal. Unless you knew the specifics of which backhaul circuit went where, you couldn’t “hack” anything (yes, “war dialing” and modems notwithstanding; back in the day, our defense against that was a system configured to drop/disallow connections and/or send a message asking the user to call their “support team”, and authorized users had that info).

        If we removed all this “connectivity”, we wouldn’t have such a large attack surface. Screw ‘customer convenience’. I think a return to the old days of a bank having those dedicated leased lines and you having to walk into an office to get account information is a much more secure model versus having any Joe Schmoe at home able to access the entire customer database… all for the sake of ‘convenience’.

        1. Come the f* on, you’re just an old guy wishing for a long-gone tech to come back again because you liked it and are uncomfortable with new stuff.

          A mainframe architecture would be NSA paradise.

          Also, customer convenience is something you’ll have to live with. It’s not HTTP’s fault that a server was coded in PHP. Blame what must be blamed.

          1. Seriously, old tends to be better?

            What a load of horseshit. A lot of the old stuff is broken; what hasn’t broken is the stuff that was either over-engineered or made sturdy because they couldn’t make it any other way. Anything built that way could be recreated today if you wanted to. For every piece of old stuff that survives, there are ten more that didn’t.

            So let’s look at some heavy metal. A battleship from the 1910s would be destroyed by a US battleship built in the 1940s. If for no other reason than the advances in computing, and the brand new fire control computer.

            A plow today, while more expensive, can do a ton more than a plow could in the past, because it’s mechanized. Machine tools today are incredibly more precise and accurate if you want them to be. Communications, let’s not go there: the old stuff is basically a minimal subset of what can be done now. Also, that old stuff has locked some of the new stuff out of using its full capabilities, because of things like limitations on frequencies to support the old stuff. Why aren’t we using spark gap transmitters anymore?

            About the only place that old is better is for some forms of security, simply because it’s less effective at doing anything at all. With a modern system that’s properly secured (i.e., money is no object), even if there is a breach, you can tell exactly what was accessed. With most conventional old tech, that’s not possible after a breach. With new tech, even if there was a physical breach, it’s quite likely that the contents are inaccessible if things like encryption are used. Yes, there are potentially other problem areas.

    1. There are some known, highly unorthodox methods for bypassing an air gap. I’ve heard everything from modulating existing EMF interference to communicating messages through the thermal effects of varying workloads (and everything in between). Some are better for unmanned datacenters where you have computers sitting in stable environments for a long time; others suit stuff like the EMF manipulation that would work on the computer in front of you. (A rough sketch of the transmit side follows.)
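      To illustrate, the transmit side of the workload trick reduces to something this simple; this toy C sketch (assuming a POSIX system for nanosleep) encodes one bit per second as busy-spin versus sleep, and the receiver, which must sense temperature or emissions on separate hardware, is the genuinely hard part and is not shown:

        #include <stdio.h>
        #include <time.h>

        /* Busy-spin for roughly `secs` seconds: real work, real heat
           and emissions.  This is the "1" symbol. */
        static void burn(double secs)
        {
            clock_t end = clock() + (clock_t)(secs * CLOCKS_PER_SEC);
            volatile unsigned long sink = 0;
            while (clock() < end)
                sink++;
        }

        /* Sleep to let the machine cool.  This is the "0" symbol. */
        static void cool(double secs)
        {
            struct timespec ts;
            ts.tv_sec  = (time_t)secs;
            ts.tv_nsec = 0;
            nanosleep(&ts, NULL);
        }

        int main(void)
        {
            const char *bits = "1011001";  /* bits to leak, 1 bit/sec */
            const char *p;
            for (p = bits; *p; p++) {
                printf("sending %c\n", *p);
                fflush(stdout);
                if (*p == '1') burn(1.0);
                else           cool(1.0);
            }
            return 0;
        }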

  1. Wow, ARC? That was the basis of the Nintendo Super FX chip and some sound cards, wasn’t it?

    And SPARC too? At the core of every Intel PC there’s a small SPARC core doing its best to make it secure.

    1. ARC has a long history in deeply embedded devices since the Nintendo days. Most CF and many other flash card devices have an ARC processor embedded in them, doing the flash management etc. Intel processors have had ARC processors embedded for years doing things which benefit from being programmable, but which aren’t user programmable and are invisible to the operating system on top. I’m not saying the following are what they are used for, but it’s the sort of thing they could be: cache prefetch prediction, MMU page fault handling, interrupt management, early configuration of the DDR controllers, etc.

    1. All modern CPUs have something similar, because CPUs have become so complex and cost so much to design and manufacture. Part of it is to disable parts of the silicon. Made a 32-core CPU and most of the cores failed? No problem, just configure it to present 4 cores and sell it as a different product. Some cool new caching feature failed at the silicon level? Disable it.

      There was a good CCC talk which indirectly explains why this is needed in CPUs:
      https://media.ccc.de/v/32c3-7171-when_hardware_must_just_work

      1. Yes, but is the AMD equivalent part designed to be accessed from the outside through the network?

        The Intel ME has its own MAC and direct access to the network gear. Some versions of Intel chips even have built-in 3G connectivity supposedly for theft prevention etc. with access to the ME.

        It’s a completely different thing to have a helper controller inside that can be accessed and programmed by the factory on a special test bench, and a whole other thing entirely when it’s exposed to the world while the CPU is actually in use.

          1. Whoa! That was slightly depressing reading. :-\

            To sum it up, we, as users of modern computers, are practically not allowed to have control of our hardware or our data.

            It seems that the “market” (i.e. the suppliers) has decided that we do not want that. A pat on the head and “only trusted people can access your data, and only in extreme circumstances”. Security by obscurity is a ticking bomb.

            In other words, one could *expect* any modern computer to quite possibly be leaking as soon as there is something sufficiently interesting on it. Quite likely not only to western intelligence agencies but, equally important, to criminals too.

            I have no problem imagining rumours about Russian intelligence using typewriters for more sensitive documents again to be true.

  2. My ignorance-based idea for safe computers is to start from ARM: build each subsystem on licensed open ARM chips. NICs, CPUs, GPUs, MMUs, USB ports, use ARM chips for all of them, write software to run it, and be rid of proprietary software, firmware, and hidden processors. Even our USB memory sticks have controllers that can & do run malware. Proprietary chips prevent safe computer use. ARM is just one way that safe computers could be built.
    I hope you have a better idea how to build a safe computer and are more able to move toward operating safe computers!

      1. I’d really like to see it become more open source. As was mentioned in the recent article on open source computing, it isn’t entirely open source in a few ways. The one pictured, for example, boots from an SD card. I’d like to see someone try to make a *working* drive or card to boot from, or perhaps re-write the firmware of an existing SD card.

    1. Having open source chips isn’t in itself a guarantee that the system is safe, because you still have to audit a circuit with potentially billions of transistors. In the end, there’s no way to be absolutely sure what it does because even the designers cannot be absolutely sure that it works the way it’s supposed to work.

      It’s like trying to find a bug in a billion lines of code – not an easy task – and the bad habit of open source development is to skip the auditing and testing because nobody’s paying you any money for it. A billion uninterested eyes find no bugs, whereas a few hundred highly motivated criminals and/or government agencies do.

      That’s why it’s a kind of mixed blessing. If you have open source open everything, you -can- find the security holes, but it’s a whole other question whether -you- will find it first or the bad guys.

    2. And what stops some hacker from becoming one of the developers, and then adding some well-hidden backdoor for one of the governments or agencies? Wasn’t there an NSA-sponsored backdoor hidden in a Linux/Unix kernel for years? I read something about that last year.
      Finding and fixing bugs is hard enough. I suppose that finding a backdoor hidden on purpose, especially when you are not looking for it, will be much harder…

      1. No, there was no such back door, and the entire concept of open source makes that impossible.

        Not to mention, with a lot of big projects, changes are logged and reviewed as they’re made, and it’s easy to do.

        1. That’s a strong assertion there, M.
          I’m not sure how you can defend that position. Open software is enormously better than proprietary software, but both are capable of having long-term bugs. To an uninformed observer, the difference between a backdoor and a mere bug is zero.

  3. The black market value of any exploit is going to be way higher than the fame. That is the reason that trust is so important, because if there is an issue, we’re likely to never hear about it.

    Also, how is this tagged Echo and the Bunnymen? Did I miss some joke?

  4. “Thus, an exploit for the ME is what all the balaclava-wearing hackers want, but so far it seems that they’ve all come up empty.”

    How do we know they have come up empty?

    Criminal organisations around the world won’t shout about it if they break the ME; they will use it for their profit quietly.

    1. As if the SPI in the picture is going to be……………..SAY. JUST HOW is this dongle supposed to work anyway? It’s a brick full of keys, but how is it not just handing them over to whatever black hardware is lurking in the computer? Being outside the computer, how is it supposed to be able to vet anything? Never mind that a compromised computer/hardware/whatever could behave trustworthily, what is even the criterion for trustworthiness, and why are its keys important? What do they unlock? What does this brick even do? Is it supposed to be a boot device? How can it get around hardware exploitation then? How can we trust it?

      I don’t see how this could work at all, or how it is anything beyond someone’s idle what-if answer to the problem: “well, what if the thing doing the checking were outside the computer instead of inside?”

  5. Let me put it this way: would YOU, as a celebrity, be happy to employ a chief of security in charge of your protection who answered to NO ONE? No one to the best of your knowledge, that is; whether the guy has an agenda of his own or not, you cannot possibly know about it. He might be your faithful servant, or he might be planning to package you up next time you climb into your limo and deliver you to whoever has him bought or blackmailed. This sort of thing would only work if you had blind faith in this guy, and these days faith of any sort is in VERY short supply. So thanks but no thanks. Never had an Intel chip “inside”, not about to start now…

    1. If I were the NSA or CIA, there are five companies I would have a secret FISA court grant full access to, which would save the US taxpayers trillions of dollars:
      Intel
      Cisco
      Microsoft
      Google
      Facebook

          1. Just pointing out that it is rather confusing to use the word “intel” (as in intelligence) and Intel (as in the company) in a comment about governments gathering intel by exploiting Intel chips, posted on an article about the Intel Management Engine (which could be leaking intel).

  6. The ME always gave me the creeps for the above reasons, so I never used it.
    On the BIOS front, I wish Open Firmware had won out instead of EFI, as it would have been less of a mess.
    The industry depending so much on security through obscurity is a disaster waiting to happen.

    1. The other version I like to call “security through naivety”

      Because once you put your trust in something that everyone can poke and shake, you start a race of who finds the security flaws first, and the adversaries you’re racing against are better funded and motivated than you are.

      With open source, you still have to pay people to find all the security holes, which generally doesn’t end up happening.

      1. Good name for it, as any security that obscurity may provide eventually disappears: when something becomes common enough, there will be a lot of people attacking it.
        Of course it gets worse if they put in a back door for the government to spy, as people will eventually find it.
        Besides, who in their right mind would trust the government with that kind of power anyway, as it will be abused.

        1. You misunderstand. “Security through naivety” is when you publish all your stuff in hopes that somebody else would help audit and find the bugs for you, for free, and not try to take advantage of it. That’s what the Open Source security model basically is.

          The ideal case for security would be software code that only -you- can see. The next best case is code that a trusted partner can see but nobody else, and the least secure case is when everybody can see it. Think of it as if the Allied military command had told the Nazis when and where the D-Day landing was going to be, under the illusion that their superior forces could punch through anyway.

          Well, if you open source your war plan, it has to be even more foolproof than if you had kept it a secret, because the enemy will spot details and flaws that you didn’t.

          1. Bruce Schneier pointed out that anyone can design cryptography that they themselves cannot find a fault with. If you’re very good, then OK, inspect the code to assure yourself that it’s safe. All the rest of us will prefer code that was peer-reviewed. Do you know that all the crypto the US government recommends (DES, AES, SHA, etc.) was chosen in open competitions, by submitting all the candidates to public scrutiny for extended periods of time? Open source is indeed the best practice, not naivety.

      2. You mean like how DVD encryption has only 40 keys, because the short-sighted people who designed it for some reason figured there would never ever be more than 40 different companies wanting to make encrypted DVDs – then to make the scheme work all 40 keys had to be included on a chip in *every* DVD player and computer DVD drive.

        So when someone cracked the code, with the help of those ‘sekrit keez’, the DVD industry cried foul. It’s like handing a copy of your private journal to a billion people, with “Do Not Read” on the cover.

        Someone is going to read it.

        1. They knew it was going to get hacked eventually.

          Most copyright regimes include a clause that says you’re not allowed to bypass copyright protection and DRM, and they were simultaneously lobbying for said laws to that end. It was all after the fiasco of trying to copy-protect CDs with a bogus track that would confuse CD-RW drives, so they figured “hey, it’s never going to work, but we can pretend to put effort into it and then simply throw lawsuits around”.

          1. Exactly. Your DRM scheme doesn’t actually have to work. You just have to prove that you put in an effort, and then you have legal grounds to sue anyone who breaks it. Intent to prevent copying is more important than effectiveness at preventing copying.

  7. ***Problem with keys on a dongle: private keys, whether for asymmetric or symmetric crypto, have to be there, unless you use just public keys and a CA, in which case you rely on the security of the CA.

    If you use a hardware oracle, you then have to have the OS integrate it transparently, like into the PE or ELF loader and the BIOS firmware. Even then you have to worry about malware automating steps and using memory corruption.

    ***Learning TXT, ME, or SGX internals: you can spend months in IDA Pro with the Intel SDKs. I just did it with SGX for the Skylake arch. There are also microcode packages.

    ***”balaclava-wearing”: uh oh, malware and security talk; that means it’s time for scary European and Russian stereotypes that don’t fail to deliver on hypocrisy…

    DRM is actually way ahead. They have had oracle-type USB dongles and advanced code-execution protection for years. Look at the piracy statistics for the Xbox 360, which hashes all of RAM for all vital processes; then you have DRM dongles like Guardant solutions, and some older ones that decrypt byte-code VMs.

    You can actually do what an Xbox 360 does with a PCIe solution, and not even the most advanced ROP heap exploit would work, even in ring 0.

    1. CONCLUSION&SOLUTION

      One of the reasons I can’t suggest anything is that something on PCIe that provided services to a host OS (like isolated signature checking and page-table protection: hash everything) would probably be broken by the CPU cache mechanism. Unless kernel page faults are better than advanced code execution and malware in your eyes…

  8. #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #pragma pack(1)

    #define TRUE 1
    #define FALSE 0

    typedef unsigned char BYTE;
    typedef unsigned short WORD;
    typedef unsigned int DWORD;
    typedef unsigned char BOOL;

    #define ARC_MAX_BITS 12

    #define CT_NONE 1
    #define CT_7_BIT 2
    #define CT_8_BIT 3

    class CArcEntry
    { public:
      CArcEntry *next;
      WORD basecode;
      BYTE ch,pad;
    };

    class CArcCtrl //control structure
    { public:
      DWORD src_pos,src_size,
        dst_pos,dst_size;
      BYTE *src_buf,*dst_buf;
      DWORD min_bits,min_table_entry;
      CArcEntry *cur_entry,*next_entry;
      DWORD cur_bits_in_use,next_bits_in_use;
      BYTE *stk_ptr,*stk_base;
      DWORD free_index,free_limit,
        saved_basecode,
        entry_used,
        last_ch;
      CArcEntry compress[1<<ARC_MAX_BITS],
        *hash[1<<ARC_MAX_BITS];
    };

    class CArcCompress //file header
    { public:
      DWORD compressed_size,compressed_size_hi,
        expanded_size,expanded_size_hi;
      BYTE compression_type;
      BYTE body[1];
    };

    BOOL Bt(int bit_num,BYTE *bit_field)
    {
      bit_field+=bit_num>>3;
      bit_num&=7;
      return (*bit_field & (1<<bit_num)) ? 1:0;
    }

    BOOL LBts(BYTE *bit_field,int bit_num)
    {
      BOOL result;
      bit_field+=bit_num>>3;
      bit_num&=7;
      result=*bit_field & (1<<bit_num);
      *bit_field|=(1<<bit_num);
      return (result) ? 1:0;
    }

    DWORD BFieldExtU32(BYTE *src,DWORD pos,DWORD bits)
    {
      DWORD i,result=0;
      for (i=0;i<bits;i++)
        if (Bt(pos+i,src))
          result|=1<<i;
      return result;
    }

    void ArcEntryGet(CArcCtrl *c)
    {
      DWORD i;
      CArcEntry *temp,*temp1;

      if (c->entry_used) {
        i=c->free_index;

        c->entry_used=FALSE;
        c->cur_entry=c->next_entry;
        c->cur_bits_in_use=c->next_bits_in_use;
        if (c->next_bits_in_use<ARC_MAX_BITS) {
          c->next_entry = &c->compress[i++];
          if (i==c->free_limit) {
            c->next_bits_in_use++;
            c->free_limit=1<<c->next_bits_in_use;
          }
        } else {
          do if (++i==c->free_limit) i=c->min_table_entry;
          while (c->hash[i]);
          temp=&c->compress[i];
          c->next_entry=temp;
          temp1=(CArcEntry *)&c->hash[temp->basecode];
          while (temp1 && temp1->next!=temp)
            temp1=temp1->next;
          if (temp1)
            temp1->next=temp->next;
        }
        c->free_index=i;
      }
    }

    void ArcExpandBuf(CArcCtrl *c)
    {
      BYTE *dst_ptr,*dst_limit;
      DWORD basecode,lastcode,code;
      CArcEntry *temp,*temp1;

      dst_ptr=c->dst_buf+c->dst_pos;
      dst_limit=c->dst_buf+c->dst_size;

      while (dst_ptr<dst_limit && c->stk_ptr!=c->stk_base)
        *dst_ptr++ = *--c->stk_ptr;

      if (c->stk_ptr==c->stk_base && dst_ptr<dst_limit) {
        if (c->saved_basecode==0xFFFFFFFFl) {
          lastcode=BFieldExtU32(c->src_buf,c->src_pos,
            c->next_bits_in_use);
          c->src_pos=c->src_pos+c->next_bits_in_use;
          *dst_ptr++=lastcode;
          ArcEntryGet(c);
          c->last_ch=lastcode;
        } else
          lastcode=c->saved_basecode;
        while (dst_ptr<dst_limit && c->src_pos+c->next_bits_in_use<=c->src_size) {
          basecode=BFieldExtU32(c->src_buf,c->src_pos,
            c->next_bits_in_use);
          c->src_pos=c->src_pos+c->next_bits_in_use;
          if (c->cur_entry==&c->compress[basecode]) {
            *c->stk_ptr++=c->last_ch;
            code=lastcode;
          } else
            code=basecode;
          while (code>=c->min_table_entry) {
            *c->stk_ptr++=c->compress[code].ch;
            code=c->compress[code].basecode;
          }
          *c->stk_ptr++=code;
          c->last_ch=code;

          c->entry_used=TRUE;
          temp=c->cur_entry;
          temp->basecode=lastcode;
          temp->ch=c->last_ch;
          temp1=(CArcEntry *)&c->hash[lastcode];
          temp->next=temp1->next;
          temp1->next=temp;

          ArcEntryGet(c);
          while (dst_ptr<dst_limit && c->stk_ptr!=c->stk_base)
            *dst_ptr++ = *--c->stk_ptr;
          lastcode=basecode;
        }
        c->saved_basecode=lastcode;
      }
      c->dst_pos=dst_ptr-c->dst_buf;
    }

    CArcCtrl *ArcCtrlNew(DWORD expand,DWORD compression_type)
    {
      CArcCtrl *c;
      c=(CArcCtrl *)malloc(sizeof(CArcCtrl));
      memset(c,0,sizeof(CArcCtrl));
      if (expand) {
        c->stk_base=(BYTE *)malloc(1<<ARC_MAX_BITS);
        c->stk_ptr=c->stk_base;
      }
      if (compression_type==CT_7_BIT)
        c->min_bits=7;
      else
        c->min_bits=8;
      c->min_table_entry=1<<c->min_bits;
      c->free_index=c->min_table_entry;
      c->next_bits_in_use=c->min_bits+1;
      c->free_limit=1<<c->next_bits_in_use;
      c->saved_basecode=0xFFFFFFFFl;
      c->entry_used=TRUE;
      ArcEntryGet(c);
      c->entry_used=TRUE;
      return c;
    }

    void ArcCtrlDel(CArcCtrl *c)
    {
      free(c->stk_base);
      free(c);
    }

    BYTE *ExpandBuf(CArcCompress *arc)
    {
      CArcCtrl *c;
      BYTE *result;

      if (!(CT_NONE<=arc->compression_type && arc->compression_type<=CT_8_BIT) ||
          arc->expanded_size>=0x20000000l)
        return NULL;

      result=(BYTE *)malloc(arc->expanded_size+1);
      result[arc->expanded_size]=0; //terminate
      switch (arc->compression_type) {
        case CT_NONE:
          memcpy(result,arc->body,arc->expanded_size);
          break;
        case CT_7_BIT:
        case CT_8_BIT:
          c=ArcCtrlNew(TRUE,arc->compression_type);
          c->src_size=arc->compressed_size*8;
          c->src_pos=(sizeof(CArcCompress)-1)*8;
          c->src_buf=(BYTE *)arc;
          c->dst_size=arc->expanded_size;
          c->dst_buf=result;
          c->dst_pos=0;
          ArcExpandBuf(c);
          ArcCtrlDel(c);
          break;
      }
      return result;
    }

    long FSize(FILE *f)
    {
      long result,original=ftell(f);
      fseek(f,0,SEEK_END);
      result=ftell(f);
      fseek(f,original,SEEK_SET);
      return result;
    }

    BOOL Cvt(char *in_name,char *out_name,BOOL cvt_ascii)
    {
      DWORD out_size,i,j,in_size;
      CArcCompress *arc;
      BYTE *out_buf;
      FILE *io_file;
      BOOL okay=FALSE;
      if (io_file=fopen(in_name,"rb")) {
        in_size=FSize(io_file);
        arc=(CArcCompress *)malloc(in_size);
        fread(arc,1,in_size,io_file);
        out_size=arc->expanded_size;
        printf("%-45s %d-->%d\r\n",in_name,(DWORD)in_size,out_size);
        fclose(io_file);
        if (arc->compressed_size==in_size &&
            arc->compression_type && arc->compression_type<=3) {
          if (out_buf=ExpandBuf(arc)) {
            if (cvt_ascii) {
              //(The original conversion loop, the rest of Cvt(), and the
              //head of main() were lost in transcription; this span is a
              //minimal stand-in that drops nonstandard TempleOS
              //control characters and keeps plain ASCII.)
              j=0;
              for (i=0;i<out_size;i++)
                if (out_buf[i]=='\t' || out_buf[i]=='\n' || out_buf[i]=='\r' ||
                    (out_buf[i]>=0x20 && out_buf[i]<0x80))
                  out_buf[j++]=out_buf[i];
              out_size=j;
            }
            if (io_file=fopen(out_name,"wb")) {
              fwrite(out_buf,1,out_size,io_file);
              fclose(io_file);
              okay=TRUE;
            }
            free(out_buf);
          }
        }
        free(arc);
      }
      return okay;
    }

    int main(int argc,char *argv[])
    {
      int i=1,l;
      BOOL cvt_ascii,del_in=FALSE;
      char *in_name,*out_name,buf[512];
      if (argc>i && !strcmp(argv[i],"-ascii")) {
        cvt_ascii=TRUE;
        i++;
      } else
        cvt_ascii=FALSE;
      if (argc>i) {
        in_name=argv[i++];
        if (argc>i)
          out_name=argv[i++];
        else {
          strcpy(buf,in_name);
          l=strlen(buf);
          if (l>2 && buf[l-1]=='Z' && buf[l-2]=='.') {
            buf[l-2]=0;
            del_in=TRUE;
          }
          out_name=buf;
        }
        if (Cvt(in_name,out_name,cvt_ascii)) {
          if (del_in) {
            sprintf(buf,"rm %s",in_name);
            system(buf);
          }
        } else
          printf("Fail: %s %s\r\n",in_name,out_name);
      } else
        puts("TOSZ [-ascii] in_name [out_name]\r\n\r\n"
          "TOSZ expands a single TempleOS file. The -ascii flag will convert "
          "nonstandard TempleOS ASCII characters to regular ASCII.\r\n");
      return EXIT_SUCCESS;
    }

  9. Intel is more Israeli than US these days. I suppose that’s good or bad. Enjoy the full-on trojan in your processor.

    https://en.wikipedia.org/wiki/Intel_AMT_versions

    Currently, AMT is available in desktops, servers, ultrabooks, tablets, and laptops with Intel Core vPro processor family, including Intel Core i3, i5, i7, and Intel Xeon processor E3-1200 product family.

    I have one CPU like this, and AMT is “turned off”. That just means that *I* can’t use it. It’s the reason I’ve stuck with AMD despite performance issues.

    It’s “off” on every Intel (lololololol) desktop/laptop CPU for quite a while now. You are probably vulnerable right now.

      1. Well they have strong motivations to keep the US fighting their wars in the Middle East.

        But while they set up a war empire at the US’s expense, you can watch Schindler’s List and see how they’re surely the poor good guys, it’s not like they’re terrible people or anything, imagine that.

  10. “The first person to find an exploit for Intel’s Management Engine will become one of the greatest security researchers of the decade.” Or will become the wealthiest computer cracker in history as a few billion gets sucked from a whole lot of bank accounts.

  11. If Intel is approached by the govt (NSA, what-have-you) and offered tax breaks in secret exchange for the ME keys, does anyone think that Intel would refuse? Why are we only talking about hackers all the time?

  12. And here is the press release of Intel addressing their locally and remotely exploitable gaping hole in the Intel Management Engine that has been around since, at least, 2008.
