Sophie Wilson is one of the leading lights of modern CPU design. In the 1980s, she and her colleague Steve Furber designed the ARM architecture, a new approach to CPU design that made mobile computing possible. They did it by realizing that you could do more, and do it quicker, with less. If you've used a Raspberry Pi, or any of the myriad embedded devices that run on ARM chips, you've enjoyed the fruits of their labor.
It all began for Sophie Wilson with an electric lighter and a slot machine (or fruit machine, as they are called in the UK) in 1978. An aspiring thief had figured out that if you sparked an electric lighter next to the machine, the resulting wideband electromagnetic pulse could trigger the payout circuit. Electronics designer Hermann Hauser had been tasked with fixing the problem, and he turned to Wilson, a student working at his company.
Wilson quickly figured out that if you added a small wideband radio receiver to detect the pulse, you could suppress the false payout and foil the thief. Impressed with this innovation, Hauser challenged Wilson to build a computer over the summer holidays, based in part on a design for an automated cow feeder that Wilson had created at university. The prototype she built looked more like a hand-wired calculator than a modern computer, but its design became the basis for the Acorn System 1, the first computer that Hauser's new company, Acorn Computers, launched in 1979.
Wilson had graduated from the University of Cambridge by this time and joined Acorn as the lead designer. The System 1 was unusual in that it was cheap: priced at £65 (under $90), it was sold as a kit that users assembled and soldered themselves. It was built around a 1 MHz 6502 CPU with 1152 bytes of RAM.
Several new versions of this computer were launched in the following years, adding features like expansion cards. These were popular among enthusiasts, but none caught the public imagination in the way that the company hoped.
A Computer in Every School
Acorn's big break came with the BBC Micro, a computer designed to accompany a computer literacy program run by the UK broadcaster. The BBC Micro was built to be rugged enough for educational use, with a full-size keyboard, a BASIC interpreter, a modulator that allowed it to be connected to a standard TV, and an interface for saving and loading programs on a standard audio cassette recorder. It was a huge hit, selling over 40,000 machines a month and appearing in 85 percent of UK schools.
By this time, though, Wilson's thoughts were shifting elsewhere. The BBC Micro used the same 6502 processor as Acorn's previous computers, but Wilson and others at the company were not satisfied with the amount of computing power it provided. So, in 1983, they decided to build their own CPU for future computers.
Several factors influenced this decision. One was a visit to the company that made the 6502, where they realized that one person was working on the next version of this CPU. This showed that you didn't need a huge team to design a CPU: as long as you had a partner who could fabricate the chip for you, it wasn't that difficult. The second was the Berkeley RISC project, which stood for Reduced Instruction Set Coding. The idea was that if a CPU was built to run only a very small set of instructions, it could run faster and more efficiently. Rather than adding more instructions to the processor itself, the software running on it (above all the compiler) would break tasks down into the simple instructions that the CPU could execute quickly.
This idea appealed to Wilson. So, she and colleague Steve Furber designed their own instruction set, creating a simulator on a BBC Micro that convinced others at the company that the approach was worthwhile. They called this Project A, but it was later christened the Acorn RISC Machine or ARM.
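One consequence of this stripped-down philosophy: the first ARM chip had no multiply instruction at all, so software built multiplication out of the shifts and adds the chip did have. Here is a minimal, purely illustrative C sketch of the idea; the helper function is ours, and real compilers generate far more refined sequences.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: multiply two 32-bit numbers using nothing but
   shifts, adds and tests, the kind of simple operations an early
   RISC chip provides. Real compiler-generated code is more refined. */
static uint32_t mul_shift_add(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    while (b != 0) {
        if (b & 1)          /* lowest bit of b set?                 */
            result += a;    /* then this power of two contributes a */
        a <<= 1;            /* next power of two                    */
        b >>= 1;
    }
    return result;
}

int main(void)
{
    printf("%u\n", (unsigned)mul_shift_add(1234, 5678)); /* prints 7006652 */
    return 0;
}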
Smaller, Faster, and Better
The architecture of their design was fundamentally different from that of most CPUs. Wilson and her colleagues had tested the 6502 and other similar processors and found that their performance was limited by how much data they could handle at once. Most CPU designers responded by adding more instructions to their chips, providing new ways for the CPU to handle and process data. Wilson and Furber took the opposite approach, stripping parts away until only the bare essentials remained, creating a chip that was simpler and required less power than existing CPUs. The simpler architecture was also much easier to scale up to 16 or 32 bits, which made dealing with bigger numbers far easier. By creating less, Wilson and Furber produced a chip that could do more.
Let’s take an example — one that Wilson uses herself. The 6502 CPU that she used in the BBC Micro would take 2 clock cycles, or about 1 microsecond to add two 8-bit numbers together. But when you start using the larger numbers that most computing tasks require, the 6502 is hobbled by having to deal with these numbers in 8-bit chunks. That’s because the 6502 only works with 8 bits of data at once (called the data bus width), so it needs to chop up bigger numbers into 8-bit chunks and add these chunks together individually, which takes time. In fact, the 6502 would need 26 clock cycles to add together two 32-bit numbers.
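To see concretely why the chunking costs time, here is a small, purely illustrative C sketch that adds two 32-bit numbers the way an 8-bit CPU has to: one byte at a time, carrying between the chunks. The helper is ours, not anything from a 6502 or ARM toolchain.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: add two 32-bit values one 8-bit chunk at a time,
   propagating the carry by hand, the way an 8-bit CPU such as the 6502
   must. A 32-bit CPU does the same job in a single add. */
static uint32_t add32_in_8bit_chunks(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    unsigned carry = 0;

    for (int i = 0; i < 4; i++) {                 /* four 8-bit chunks   */
        unsigned byte_a = (a >> (8 * i)) & 0xFF;
        unsigned byte_b = (b >> (8 * i)) & 0xFF;
        unsigned sum = byte_a + byte_b + carry;   /* like the 6502's ADC */
        carry = sum >> 8;                         /* carry to next chunk */
        result |= (uint32_t)(sum & 0xFF) << (8 * i);
    }
    return result;   /* any final carry out is simply dropped here */
}

int main(void)
{
    uint32_t a = 0x12345678, b = 0x0F0F0F0F;
    printf("chunked: %08x  native: %08x\n",
           (unsigned)add32_in_8bit_chunks(a, b), (unsigned)(a + b));
    return 0;
}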
You could build a version of the 6502 with a larger data bus width, but that would sharply increase the number of transistors the chip required, because the registers, ALU and internal pathways all have to grow with it. Alternatively, you could do what Steve Wozniak did with Sweet16, a hack he wrote for the Apple II (which used the same 6502 processor as the BBC Micro) that effectively created a virtual 16-bit processor. The problem was that it ran at about a tenth of the speed of the 6502.
By contrast, the first ARM CPU that Wilson and Furber built had a 16-bit data bus width and ran at a faster clock speed than the 6502, so it could add two 32-bit numbers in nine clock cycles, or about 125 nanoseconds. And it could do that on a chip that wasn’t much bigger than the 6502. It could do this because the simpler architecture was easier to scale up to run with the bigger data bus widths. Because it only had to process a small number of instructions, the chip was simpler and faster.
Wilson and Furber designed a CPU, graphics chip, and memory controller that worked together to create a complete system for testing, which was delivered in 1985. When Furber decided to measure how much power this test processor was using, his multimeter failed to detect any power flow. Furber investigated, and realized that the development board they were using was faulty: it was not delivering any power to the CPU. Instead, the processor was quite happily running on the power delivered over the signal lines that fed data into the CPU.
Acorn quickly realized the potential of this design and moved to patent the techniques it used. The result was the first ARM architecture, ARMv1. It has been through many iterations since, but the fundamentals remain the same: a small set of simple instructions that run quickly is more efficient than a large set of complex instructions that take many cycles to complete.
While Wilson was creating the first ARM CPUs, Acorn itself was in trouble. The BBC Micro, while popular, was expensive to produce, and production problems with its cut-down sibling, the Acorn Electron, meant that the company missed the important holiday buying season in 1983: although over 300,000 machines had been ordered, only 30,000 were delivered by Christmas. On top of that, the company had borrowed significantly to scale up production and develop the follow-up model, the BBC Master. One creditor grew frustrated and tried to shut the company down, a process that led to layoffs and financial trouble. Acorn was eventually sold, passing through the hands of a number of different companies and spinning off its ARM business, which quickly became worth more than Acorn itself.
The ARM architecture took some time to become popular, and the main driver, when it came, was mobile computing. Because power is at a premium in a mobile device, the ARM architecture is ideal: it can do more work on less power than more complex chips. The ARM architecture is still used in most mobile phones, in a growing number of laptops, and in countless other devices, with companies like Apple, Samsung and many others licensing it for use in their own processors.
Wow… The lack of historical acknowledgment of Bill Mensch and The Western Design Center (which is still in business, btw) is just an injustice to the piece. Sadly, this small reference had a huge impact on the success of Ms. Wilson.
That’s not the only ‘omission’ or ‘inaccuracy’.
A little too much artistic license perhaps?
At least WDC gets a mention as “the company that made the 6502” (yeah I know it should be the 65C02).
But the article makes a giant leap from the System 1 to the BBC Micro, skipping at least the Acorn Atom but possibly others.
Oh well; there are plenty of other sources with more accurate and complete versions of this and more.
===Jac
It may gloss over the System 2-5 and Atom but those systems were all* 6502-based systems, so no significant development in the CPU.
*There was the 6809 CPU card available for the System, but that had a marginal effect on the development of the ARM. You can say it led Acorn to include the Tube on the BBC Micro for connecting external coprocessors (which was used for developing the ARM) but that’s getting into a deeper history than just describing the RISC/ARM philosophy.
Sort of left out the most important advantage of RISC, that being that each instruction (is supposed to) take the same number of clock cycles, thereby allowing for pipelining hardware and eventual superscalar execution architecture. Simpler is better when having to anticipate pipeline stalls (load/store architecture wins here, as opposed to register-memory architecture) and keeps the hardware design manageable. The opposite of this is x86, which is a nightmare of variable-length, variable-clock-cycle madness. If you look at (some of) what Intel and AMD have done to make this LSD-induced dream of a superscalar, multi-core x86-instruction-running chip a reality, it would make you insane. Yet, oddly enough, the (CISC-ish) x86 wins the CPU battle.
You really posted this to the wrong article. ARM instructions have very variable latency; this isn't the original MIPS.
Examples:
LDM/STM
MUL/MLA
X86 doesn’t change anything when it comes to multi core design. The memory model is better than some RISC which reduces complications.
The main problem is decoding, and modern designs use parallelism to handle the problem of length decoding. When instruction lengths are known, each instruction can be routed to an array of parallel decoders. Not simple, of course, but other than a little extra power consumption and a little extra delay in decoding (hidden mostly by industry-leading branch predictors), making x86 superscalar isn't a great problem.
ARM v1 didn’t have MUL or MLA – they came with v2.
Steve Furber was hardly a second-rater either – see his work on Amulet and more recently some neuron thingy.
Interestingly, Acorn also pondered switching to the i860 at one point.
ARM1 wasn't made commercially available: it was ARM2 that first made it into the wild (in the Archimedes).
The ARM1 was sold as part of the ARM Evaluation System, which was one of the many co-processors available for the BBC Micro (it was also available as an expansion card for a PC.) It was released in 1986, a year before the Archimedes was launched, and was intended for developers looking to start writing software for the new machine.
You think the memory model of X86 is better than ARM? Really? Segment and offset??
If you look at the instruction set reference
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0439b/CHDDIGAC.html
you will see that the majority of the instruction set operates in 1 cycle. The main exceptions are "repeated" instructions like the ones you reference that move multiple registers to memory, and instructions that will cause a pipeline stall (like multiply and divide). In addition, almost all ARM instructions are encoded in a very rigid format: the first 4 bits are a condition field, then (roughly) 8 bits of opcode, 4 bits of source register, 4 bits of destination register, and then a 12-bit operand field. This fixed length makes building the decode-execute hardware a lot simpler. It allows the hardware designer to "know" that the entire instruction can be fetched with one 32-bit read from memory, and that makes the instruction decoder a lot simpler to build (see the decode sketch below).
The aim is to build an instruction set that makes the hardware much simpler, so either there is less of it (cost/power reduction) or more of it can be used in other areas (wider ALUs and wider/more registers).
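To make that field layout concrete, here is a rough C sketch that pulls the fields out of a classic 32-bit ARM data-processing instruction. The struct and function are invented for illustration; a real decoder handles many more instruction formats.

#include <stdint.h>
#include <stdio.h>

/* Rough sketch: field layout of a classic 32-bit ARM data-processing
   instruction (condition, opcode, S flag, Rn, Rd, 12-bit operand2).
   Purely illustrative; a real decoder covers many more formats. */
struct dp_fields {
    unsigned cond;      /* bits 31-28: condition code                */
    unsigned imm;       /* bit  25   : operand2 is an immediate      */
    unsigned opcode;    /* bits 24-21: ALU operation                 */
    unsigned set_flags; /* bit  20   : S bit                         */
    unsigned rn;        /* bits 19-16: first source register         */
    unsigned rd;        /* bits 15-12: destination register          */
    unsigned operand2;  /* bits 11-0 : immediate or shifted register */
};

static struct dp_fields decode_dp(uint32_t instr)
{
    struct dp_fields f;
    f.cond      = (instr >> 28) & 0xF;
    f.imm       = (instr >> 25) & 0x1;
    f.opcode    = (instr >> 21) & 0xF;
    f.set_flags = (instr >> 20) & 0x1;
    f.rn        = (instr >> 16) & 0xF;
    f.rd        = (instr >> 12) & 0xF;
    f.operand2  =  instr        & 0xFFF;
    return f;
}

int main(void)
{
    /* 0xE0811002 encodes ADD r1, r1, r2 (cond=AL, opcode=ADD) */
    struct dp_fields f = decode_dp(0xE0811002u);
    printf("cond=%X opcode=%X rn=%u rd=%u op2=%03X\n",
           f.cond, f.opcode, f.rn, f.rd, f.operand2);
    return 0;
}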
As for MIPS, the intent of that design was to further reduce the complexity of the CPU hardware (thereby allowing more transistors to be used for cache and registers) by having the compiler insert no-ops when there was going to be a pipeline stall. This architecture led to the funny acronym (R)elegate the (I)mpossible (S)hit to the (C)ompiler. It also had the nasty effect of wasting a bit of memory on a bunch of NOPs. Modern MIPS processors handle pipeline stalls internally and no longer need the NOPs. It was the Berkeley RISC line (and later SPARC), rather than MIPS, that brought us the windowed register file, trading pipeline logic for a stack-like register arrangement that would speed up function calls.
I am not saying x86 designers have not been incredibly innovative in their quest to get a 40-year-old instruction set with 8-bit roots to perform at 64 bits with RISC-style parallelism. Hats off to the teams and teams of bright engineers who build these super-complex CPUs we take for granted every day. My point is that RISC was intended to increase parallelism by reducing instruction complexity, trading seldom-used complex instructions for faster, more orthogonal instructions that execute more efficiently.
Not the 8086 memory model. Look up “memory consistency model”—ARM provides really weak guarantees, and it sucks.
True until about v7/v8. And not nearly as bad as Alpha, which made multi-core programming incredibly difficult.
There’s a really good 2-hour “atomic weapons” talk by Herb Sutter (I think) about consistency models from the POV of a programming language.
Intel made a breakthrough that put them out ahead with out-of-order execution that really works and allows simultaneous execution of more than one instruction. Like multiple pipelines. It isn't parallelism because it isn't the same instruction with different data. It is more like threads. And running from fast cache, nobody today beats it. Does anyone come close? The latest Ryzen?
A key to RISC is the size of the register set. The research at Berkeley (which led to SPARC) and Stanford (MIPS) at the time showed optimal sizes for register sets. MIPS has 32 registers and SPARC had 32 (but not all general purpose). The current SPARC from Oracle has something like 70 to 700 64-bit registers, depending. ARM has 16.
ARM had the very cool conditional execution of any instruction, which gets rid of all the short branches – though a skipped instruction still effectively executes as a NOP, because it has to do something. (Not in Thumb mode?) The condition flags are the same as the 65C02's because they work and because it makes 65C02 emulation much easier.
You can manage subroutines yourself by pushing the program counter to the stack (stack pointer and PC are part of the register set) or use the pseudo-instructions in an assembler. And there is a quick interrupt mode that switches to a second register set so you don’t have to save all the register contents for this context switch. It is really fast. A modern ARM should be able to respond to an interrupt and write to a GPIO in less than five instructions or maybe 10ns. That’s 100MHz or more. The FIQ code is at the end of the vector table so there isn’t even a branch. GCC has features to use FIQ in C.
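For what it's worth, the GCC feature referred to is the ARM interrupt function attribute. A minimal sketch, assuming a made-up memory-mapped GPIO address rather than any real board's register map:

#include <stdint.h>

/* Hypothetical memory-mapped GPIO output register; the address is an
   assumption for illustration, not a real device's map. */
#define GPIO_OUT (*(volatile uint32_t *)0x40000000u)

/* GCC for ARM targets accepts interrupt("FIQ") to generate the special
   entry/exit sequence for an FIQ handler. Because FIQ mode has its own
   banked r8-r14, very little state needs saving before touching hardware. */
void fiq_handler(void) __attribute__((interrupt("FIQ")));

void fiq_handler(void)
{
    GPIO_OUT |= 1u;   /* drive the pin as quickly as possible */
}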
The way I heard it at Apple was that they visited Apple about the future, and Apple said they should visit WDC. There they saw that Mensch was basically doing the 65C816 alone, plus that the '816 was a really stupid mashup that had no future, and on the plane home they decided they could do something better themselves. And they were right.
Rumor had it that a prototype Apple IIxxx was made by the Huston brothers using the first ARM, and it performed really well; it also emulated the 65C02 fast enough to run all the legacy code. But Jobs did not want any more improvements to the Apple II line, as the Mac was the centerpiece (though all the money came from Apple II sales at the time).
Of course, later Apple went back to the ARM for Newton and the rest is history.
“Several factors influenced this decision. One was a visit to the company that made the 6502, where they realized that one person was working on the next version of this CPU.”
And now it does take a team, and they still screw up.
6502 versions have bugs too, different versions different bugs.
A modern processor is much more complex and incorporates a lot more than an old 8 bit chip. That’s the reason they can be so fast.
They’re a bit more complex now, you know.
ARM is a really good design with the right components chosen to enable performance at a (relatively) low cost.
Not being a pure RISC design but supporting such things as LDM/STM (load/store multiple registers with one instruction), conditional execution and optional bitshifting before ALU operations meant it could give good performance using page mode DRAM without any cache. Those features also gave a very good code density for a RISC design.
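A couple of tiny C functions make those two features concrete. The ARM instructions in the comments are the usual textbook mappings, not guaranteed compiler output, and the function names are just for illustration.

#include <stdint.h>
#include <stdio.h>

/* Free shift folded into the ALU operation:
   classic ARM can do this as a single ADD r0, r0, r1, LSL #2 */
static uint32_t scale_and_add(uint32_t base, uint32_t index)
{
    return base + (index << 2);
}

/* Conditional execution instead of a short branch:
   CMP r0, r1 ; MOVLT r0, r1  (no branch needed for signed max) */
static int32_t max_i32(int32_t a, int32_t b)
{
    return (a < b) ? b : a;
}

int main(void)
{
    printf("%u %d\n", (unsigned)scale_and_add(100, 5), max_i32(-3, 7)); /* 120 7 */
    return 0;
}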
Great stuff. It’s interesting to see how much more one can achieve with less. It changes the way one thinks.
When all you have is a hammer, every problem looks like a thumb!
Nice.
Thumb or Thumb2?
my solution to defeat kids with piezo lighters getting free Space Invaders games (that was the main game) was to put a 0.1 uF ceramic cap between the coin door and the slam switch.
worked really well!
too well, a couple of boyz tried to knife me, they “fell” off the mezzanine…
I don't know anything about the wideband receiver described in this article, or the conditions that required it, but it just strikes me as massive overkill.
I had to look up “slam switch”, so for the sake of my fellow dweebs who always dutifully begged their parents for quarters, it deters attempts to steal credits by banging on the coin door. Example: https://www.twistywristarcade.com/buttons-switches/1157-slam-switch-.html.
Wow, this woman is like the female version of Woz. I am really impressed. I am also a little sad that I had never heard of her before.
“I am also a little sad that I had never heard of her before”
Changed her name from Rodger, which could be why you’ve not heard of her before. Has caused confusion. Very brave to be open about it. Some would love to use that as an excuse to attack her. I say be yourself :)
Definitely a shining example for the trans community. More people really do need to know about her.
q.v. also Lynn Conway, who made numerous contributions, including VLSI with Carver Mead
https://en.wikipedia.org/wiki/Lynn_Conway
https://en.wikipedia.org/wiki/Mead_%26_Conway_revolution
SSI and LucasArts both had trans developers as well. In fact a surprising number of early computer companies between the 70s and 80s did, although it is often hard to figure out by names without knowing both their names and rough details of their employment history.
Women in the industry during that time aren't THAT rare either, but sorting them out from the men with ambiguously gendered names can often be hard (more than a few guys named Kim or Robin or other things, which makes it easy to assume they were (fe)male until you see a team photo and realize your gender assumption was horribly wrong).
seems to me that (known or at least internet-famous) women in electronics having a Y chromosome isn't exactly rare
Might be something to do with the high coincidence of transgenderism and autism in individuals, and the equally high coincidence of autism and being a geek, particularly in complicated stuff like CPU design.
If you are supportive of the trans community, a good place to start is not to use old names. They refer to it as “deadnaming”, and it’s used against them a lot.
There are practical aspects that must be considered when names change, as other threads have shown. When you do a historical search for Sophie Wilson, you simply will not get a complete picture unless you know her prior name as well. To say we’re not supportive if we want to know someone’s “deadname” ignores this reality.
My wife decided to take my last name when we married. That’s a family tradition she chose to follow. However, she has never said that nobody is allowed to use her maiden name, and in fact is required to give it on certain documents. The reality is that she simply changed her legal name. So did Sophie, except that she changed her first name rather than her last.
Historical names provide historical context. In Sophie’s case, there is a long and impressive history that is worth knowing, but cannot be known unless you understand that she used to go by a different name. I get that old names can be used as slurs against trans people, and I’m sympathetic to that, but it is patently ridiculous for you to imply that revealing her name change makes Mr. Collins a bigot.
No bigotry implied, just saying trans people prefer it if you don’t use their old names. Not everyone knows that, thus it’s worth saying.
I think that “using the old name” is very different than what Richard did. “Using the old name” is to insist in calling her “Roger” instead of Sophie, or use “he” instead of “she”, which, I think, is not the case.
As I said to the other commenter, no implication of huge fault, just saying that for most trans people not using the old name makes you a better ally.
The case for RISC vs CISC is true for simple assembly tasks. The minute you want to do anything more complicated than adding or loading from/storing to memory, RISC architectures tend to lag behind. CISC also has its disadvantages. The best approach is to combine a bit of both. But this debate was going on in the 80s and possibly some part of the 90s. Why are we still having it? ARM today is not pure RISC… a lot of RISC with some CISC. Even Intel x86 has a combination of RISC and CISC. I don't get the point of this article at all… it's like I'm reliving the 90s all over again.
ARM's true advantage is that the hardware designs at the RTL level are optimized for low power consumption, which usually comes at a cost in performance. If you look at the power consumption of the Cortex-A72s (ARM's most performant CPUs), they're not that far behind Intel's similarly performant Celerons.
ARM's other advantage is its IP licensing model… the fact that multiple vendors can sell microcontrollers with the same Cortex-A/M/R CPUs but different peripherals, for a small royalty, has revolutionized the industry and proven to be a very successful approach for both ARM and all the semiconductor vendors that sell ARM-based microcontrollers.
Of course Sophie Wilson’s contribution to CPU design as a whole is admirable and should be highlighted…I just get frustrated when I see old ideas being rehashed (especially here at hackaday) as if they are new. Maybe it’s a sign that I’m getting old…
“What has been will be again,
what has been done will be done again;
there is nothing new under the sun.”
You know that if we had a new ideas only rule, there would be no posts. Legitimately new ideas are vanishingly rare. I haven’t done anything truly never seen before in my life, and I doubt most people here have either.
One of the most humiliating things about the ARMv1 is that it is eight years younger than the 68k, but in terms of Dhrystones it’s only 20% faster.
According to https://en.wikipedia.org/wiki/Instructions_per_second the 68000 runs at 2.188 MIPS at 12.5 MHz while the ARM2 runs at 4 MIPS at 8 MHz. This is quite impressive since the 68000 has about twice as many transistors according to https://en.wikipedia.org/wiki/Transistor_count
Also note that the 68020 runs at 4.848 MIPS at 16 MHz while having 190000 transistors (while ARM 2 has 30000). However, the comparison is a bit closer when comparing the die-area of the chips (ARM 2 has 30 mm^2 while 68020 has 85 mm^2).
I compared Dhrystones for a reason. ARMv1 does substantially less work (about 50%) per instruction than the 68k at the same time as it executes each instruction in roughly 20% of the time.
It takes approximately 800 68k instructions per Dhrystone, and 1600 ARMv1 instructions per Dhrystone.
The early advantages of RISC architectures were die size (i.e. cost), power efficiency, and ease of adding pipelining… but that’s genuinely more or less it.
It seems that the response was also for reasons, and made more useful points.
Are you just saying that a benchmark has the score of the benchmark, and that tells you about the benchmark? When I’m selecting a microprocessor, I definitely have to consider the whole design, and all of the tradeoffs, and I’d never even be looking at a benchmarked score outside of some use case for the number. Whereas how many instructions per clock it executes is directly relevant, I can just open the datasheet and look at the instruction set to see what that will mean for my code!
I'm saying that a raw measurement of MIpS is just about the most useless metric possible.
Dhrystones are a terrible metric, but they are still less terrible than MIpS.
Dhrystones is definitely rigged. 68000 instruction set is almost RISC. The only difference between 68000 and RISC architecture is that the 68000 does memory to register ALU instructions, and memory to memory moves. Otherwise 68000 is RISC.
The main bottleneck with the 68000 is that it takes 4 (sometimes 6) cycles to access 16 bit words from memory. The 68000 can’t make up that huge gap.
“ The minute you want to do anything more complicated than adding or loading from/storing to memory, RISC architectures tend to lag behind.”
This is not true. All the early RISC processors were designed with high-level languages, particularly compilers, as their primary means of being programmed. Read the seminal volume: "Computer Architecture: A Quantitative Approach."
All processor designs from the 1970s were designed with the objective of being ever easier to program, because programmers are the most expensive component. So CISC processors 'evolved' under the belief that increasing the complexity of the instruction set would reduce the 'semantic gap' between assembler and high-level languages: because more work was being done by each instruction, and hardware was more efficient than software, compilers would be easier to write and would generate faster code.
It's intuitive, but wrong. It's primarily wrong because it's hard for compilers to sort out which combination of the myriad instruction forms produces the fastest/shortest code. What Hennessy and Patterson worked out was that the extra hardware wasted resources on features like that, which were hard to exploit, so they designed architectures around features that were easy for compilers to exploit. The principle being: make the common case fast and the complex case work. These are:
1. Large numbers of registers (swapping data in and out of registers is a pain, as anyone who programs an 8-bit PIC knows).
2. Minimal instruction formats (easy to weigh the options if there are only 3 and simple to design hardware with little decoding).
3. Single length instructions (easy to look ahead if they’re all the same length).
4. Simple addressing modes.
5. Load/Store (simplifies the bus interface, simplifies compiler combinations).
6. Three address ALU operations.
The reduced decoding required in every area meant they could allocate resources and speed up the processor where it counted: where it would add processing bandwidth. Thus pipelining the CPU becomes relatively easy because everything is regular. 3 address ALU operations allow you to do some things 3x faster. Load / Store allows you to decouple the memory interface from the main CPU datapath.
That’s RISC.
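To make those principles tangible, here is a toy sketch of a load/store machine in C, with fixed-length, three-address instructions. It is not any real ISA (the encoding and mnemonics are invented for the example), but it shows how little decoding such a design needs.

#include <stdint.h>
#include <stdio.h>

/* A toy load/store machine, sketched only to illustrate the principles
   above: fixed-length instructions, three-address ALU ops, and memory
   touched only by load and store. It is not any real ISA. */

enum { OP_LOAD = 0, OP_STORE = 1, OP_ADD = 2, OP_HALT = 3 };

/* One fixed 32-bit format: [31:24] opcode, [23:20] rd, [19:16] rs1,
   [15:0] second source register or immediate address. */
#define ENC(op, rd, rs1, imm16) \
    (((uint32_t)(op) << 24) | ((uint32_t)(rd) << 20) | \
     ((uint32_t)(rs1) << 16) | ((uint32_t)(imm16) & 0xFFFFu))

static uint32_t reg[16];
static uint32_t mem[256];

static void run(const uint32_t *prog)
{
    for (unsigned pc = 0; ; pc++) {
        uint32_t instr = prog[pc];
        unsigned op  = instr >> 24;           /* decoding never depends  */
        unsigned rd  = (instr >> 20) & 0xF;   /* on which opcode it is   */
        unsigned rs1 = (instr >> 16) & 0xF;
        unsigned imm = instr & 0xFFFFu;

        switch (op) {
        case OP_LOAD:  reg[rd] = mem[imm];                  break; /* rd = mem[imm]    */
        case OP_STORE: mem[imm] = reg[rs1];                 break; /* mem[imm] = rs1   */
        case OP_ADD:   reg[rd] = reg[rs1] + reg[imm & 0xF]; break; /* three addresses  */
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    mem[0] = 40;
    mem[1] = 2;

    const uint32_t prog[] = {
        ENC(OP_LOAD,  1, 0, 0),   /* r1 = mem[0]  */
        ENC(OP_LOAD,  2, 0, 1),   /* r2 = mem[1]  */
        ENC(OP_ADD,   3, 1, 2),   /* r3 = r1 + r2 */
        ENC(OP_STORE, 0, 3, 2),   /* mem[2] = r3  */
        ENC(OP_HALT,  0, 0, 0),
    };

    run(prog);
    printf("mem[2] = %u\n", (unsigned)mem[2]);   /* prints 42 */
    return 0;
}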
Load/store makes my brain itch. Where did it come from? Everything about architectures called it "single address".
Skidmore. Hmm. Julian Skidmore. Something about Forth is emerging from the fog of the past….FIG? Forth Dimensions? MVP?
Hi!
Load/Store isn’t the same as single address. Single address means that there’s one address field in an instruction, but there could be multiple types of instructions that apply. The 6502, 6800 and 8-bit PIC are all examples of single address architectures.
Load/Store means the memory interface is decoupled from the main datapath, ie main register access and ALU operations. That way, CPU decode never has to determine if it needs to wait for external memory address generation and memory fetch cycles to complete.
And yes, I'm the designer of the DIY AVR-based Forth computer FIGnition. I sold about 1000 units, and the latest source code was pushed to GitHub a couple of days ago (it includes the serial flash translation layer; a more efficient composite video driver than any Arduino driver; a very compact floating-point library; bitmapped graphics over serial SRAM; a simple XOR-based blitter; user-interrupt support; stack checking; and a few other nice features)!
RISC designs were at a severe commercial disadvantage compared to the CISC x86. The reason was memory cost. The x86 design had a much higher binary code density. This reduced both the amount and the speed of memory needed to store and run x86 code compared to RISC. This was a tremendous advantage for x86 until the late 1990s, when memory prices declined.
This is not true. Early RISC processor designers were very aware of the cost of memory, and cited it as one of the main rationales behind CISC. They judged that RAM was already cheap enough to support the lower code density.
RISC suffered commercially for one reason only: x86 was already so dominant no-one wanted to risk (sic) anything else. It's the same reason why the 680x0 suffered compared with the x86 when it was clearly superior for business computing.
I agree. There’s a reason why ARM allows conditions on every instruction, and why the barrel shifter can be combined with almost every other instruction.
The C in RISC stands for “computer” not “coding”.
True.
Actually, if you watch the first video, Sophie Wilson says it stands for Reduced Instruction Set Complexity. That's news to me. The original PCW article where I first read about RISC (about the ARM1 no less) called it Reduced Instruction Set Computer; the Hennessy and Patterson book "Computer Architecture, A Quantitative Approach" says "Computer"; our computer architecture course in the late 1980s referred to the C in RISC as "Computer".
It is Computer, and the other terms are incorrect. I see lots of technical journalism that makes these kinds of mistakes lately.
Register size isn’t the same thing as data bus width. A Motorola 68008 has an 8 bit data bus but 32 bit registers.
Which means the 68008 is a 32-bit CPU. Internally it has a 16-bit ALU, but this doesn't define its architectural size. Consider the original Data General Nova: it was a 16-bit computer despite having a 4-bit ALU and datapath.
Hunting around I found this, an interesting read. I was looking for the reference to the "no power supply" ARM core running off the data lines:
https://queue.acm.org/detail.cfm?id=1716385
And I didn’t see any mention of “Sophie” Wilson in it.
Sophie Wilson is transgender.
She is the “Wilson” in “Wilson and I were doodling processor designs on bits of paper…” and is the other half of the “we” thereafter.
How about some male heroes too? Discrimination is just that. Positive discrimination doesn't exist, and these ladies can weather the competition just fine.
Roger that
Almost all the hero pieces on here are about females, and Sophie has chosen to go through life as one. Telling a one-sided story doesn’t fix history or inequality. Telling a one-sided story is untruthful, and creates the inequality it aims to combat. Women don’t need biased reporting to look good. Implicitly suggesting they do by biasing the narrative is just a continuation of what went wrong all that time.
Then how do you suggest people address erasure?
That seems worth discussing, but also seems not to be a part of this discussion. You can’t correct women being under-represented in much of scientific history by under-representing men. Two wrongs don’t make a right.
On the way! We’ve got Richard Feynman up next Tuesday.
I don’t think she ever had to weather any competition, she was in a promoted group when she was doing this work, and is in a promoted group now that that has changed. It is somewhat of a reach to seek out discrimination in this particular story; if it had happened, it must have been to somebody else that we’re still not talking about.
We need someone in the U.K. to start “Wilsonfruit Industries” to directly compete with “Adafruit Industries” in the U.S. Competition is GOOD – and so is refreshing the historical role Women played in Science and Technology development. Primary Requirements: The Founder must be a Woman, and she must NOT dye her hair pink (for Marketing purposes). Any other hair color is acceptable ;-)
So, that rules out Sinead O’Connor?
B^)
Wouldn’t it be “Sophiefruit Industries”? Ada is a first name.
If it were “Wilsonfruit” people would be expecting the founder to be a volleyball.
From the BBC at 30 event (I was there!). Steve Furber gave a good talk about how they came to design the ARM.
Initially, they were going to go with Intel's 80286. However, they didn't like the bus architecture, which they felt was too inefficient (the 6502 can do a memory fetch in a single cycle, but the 8086 took 4 and the 286 took 2 or 3). They went to Intel to ask whether either party (Acorn or Intel) could produce a version with a more efficient bus architecture, but Intel rejected the idea. Then they decided to visit WDC (around that time reading up on the Hennessy and Patterson research papers, which became SPARC and MIPS) and were encouraged by the fact that basically a single bloke was developing a commercially successful 16-bit CPU. Aspects of the ARM1 architecture formed the basis of Steve Furber's PhD (I was part of Amulet for a while).
The article is also technically wrong where it claims that the data bus width of the ARM1 was 16 bits. From the Ken Shirriff blog entry you can see it had 32 pins allocated to the data bus.
http://www.righto.com/2015/12/reverse-engineering-arm1-ancestor-of.html
How it is possible to write about the ARM chip and not mention the Archimedes and the RISC PC is beyond me.
There are many mistakes in this article.
If you want to know more you’d better see the links in this thread : http://www.stardot.org.uk/forums/viewtopic.php?f=41&t=9602&hilit=remarkable
I came here to say exactly this. The Archimedes pushed forward development of the ARM architecture, not mobile phones. Acorn didn’t just give up after creating the Master…
From what I’ve heard…
Modern x86 and x86-64 have one or more RISC-like cores under the pipelining control layer.
They have a translation layer above that so the chip can execute the expected CISC instructions while the RISC-like core(s) stay hidden from the end user. Apparently those among the scenes (I'll detail them below) believe the internal RISC cores differ widely between the CPU generations being targeted, and thus it would be difficult to impossible to make a universal compiler that would work on all the RISC variants used in the Intel x86/x86-64 architecture.
To detail my sources, I've gathered this "knowledge"* from the microcode hacking scene(s), the Intel Management Engine hacking scene (BMC+IPMI hacking scenes) and other general Intel/AMD hacking scene(s).
*P.s. My mind can sometimes fail to remember/recall things correctly, so if there is something wrong/missing then corrections are welcome. ;-)
It’s not a bus, but a decode stage that translates the fetched opcode into native instruction format.
I thought that was true for older Pentium designs, but not the modern ones. The modern ones are based on a mobile chip, which had a simpler (in theory) design and ran CISC x86 code directly, because the RISC stuff slowed execution down.
Question: were the BBC Micro and ARM her biggest contribution, or was it the first ADSL modem?
https://www.eetimes.com/document.asp?doc_id=1180959
I soldered together an Acorn System One from a kit when I was in the Sixth Form at school in 1979. My first computer kit-build! My second was a Compukit UK101, also 6502-based, in 1980.
Indeed, which was featured on the second part of the BBC click programme that introduced FIGnition!
I remember the days when I played Repton on a BBC Micro computer at school. Everyone was playing it for lack of a better choice. It had the most annoying BGM and it still gets stuck in my head sometimes.
I remember Martello Tower! The only BBC game we had at school. I finally completed it sometime in my 30s, on my own computer with an emulator; I'd left school by then.
The biggest omission in this article is the assumption that, during the time it took for mobile computing and the fortunes of ARM to take off, nothing happened in the RISC versus CISC battle. The truth is that the Microsoft PC might have used Intel CPUs, which were Complex Instruction Set Computers (CISC), but almost all of the workstation and Unix server business was RISC: Digital Equipment Corporation (DEC) with Alpha, Sun Microsystems (SUN originally and now Oracle) with SPARC, and Tandem's NonStop, which first used Silicon Graphics MIPS processors. Lastly, you would have to mention HP and their PA-RISC line of Unix servers. There is a smaller part played by IBM with their POWER chips since Apple's move from RISC to CISC, but the recent focus on Linux is refocusing them on RISC.
I am not dismissing the importance of ARM in a modern world driven by cellphones using their technology; they certainly deserve that credit. I am just concerned that the author of the article, in his efforts to praise Ms. Wilson, overlooked the huge group of computer scientists who were also moving against the CISC paradigm, which had totally ignored the idea set forth by the father of computing, Alan Turing, that instruction sets should be as simple as possible.