Unless you’re bit-banging a CRT interface or using a bunch of resistors to connect a VGA monitor to your project, odds are you’re using proprietary hardware as a graphics engine. The GPU on the Raspberry Pi is locked up under an NDA, and the dream of an open source graphics processor has yet to be realized. [Frank Bruno] at Silicon Spectrum thinks he has the solution to that: a completely open source GPU implemented on an FPGA.
Right now, [Frank] has a very lightweight 2D and 3D engine well-suited for everything from servers to embedded devices. If their Kickstarter meets its goal, they’ll release their project to the world, giving every developer and hardware hacker out there a complete, fully functional, open source GPU.
Given the difficulties [Bunnie] had finding a GPU that doesn’t require an NDA to develop for, we’re thinking this is an awesome project that gets away from the closed-source binary blobs found on the Raspberry Pi and other ARM dev boards.
Is it OK to use Kickstarter to release existing work? I’ve thought about doing this myself with some projects, both to provide a bit of a retroactive paycheck and to motivate cleaning them up for public release, but at the same time people release open source work out of the goodness of their hearts all the time.
Well, I assume that they (two people) would be working full time on this project. According to their timeline, they would be done with the work for the main goal by May 2014, so that accounts for 2×6 = 12 man-months.
A 3d graphics specialist with 20 years of experience would most likely earn ~$150k per year if employed at a company (maybe not in NH, though…). So charging $200k for freelance work is not completely unrealistic.
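As a rough sanity check of that math, here is a back-of-the-envelope sketch. The engineer count, timeline, and salary come from the comment above; the 1.3× overhead multiplier for taxes, benefits, and equipment is my own assumption, not a figure from the campaign.

```python
# Rough sanity check of the funding goal.
# Inputs from the comment: 2 engineers, 6 months, ~$150k/yr salary.
# The 1.3x overhead factor is an assumption, not a campaign figure.
engineers = 2
months = 6
annual_salary = 150_000

man_months = engineers * months                 # 12 man-months
labor_cost = man_months * annual_salary / 12    # $150,000 in raw salary
with_overhead = labor_cost * 1.3                # ~$195,000 all-in

print(man_months, labor_cost, with_overhead)
```

Which lands right around the $200k asking price, so the figure is at least in the plausible range for a year of two-person full-time work.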
Remember this? http://hackaday.com/2008/05/21/open-graphics-card-available-for-preorder/
Can’t find the HDL code. Was it ever made available?
I’m not certain it was ever *written*, let alone made available.
While I support their cause, the development prices seem very high.
$20.000 for a 2D-only card with functionality that many a student has implemented before.
Not $20,000… add a zero: $200,000.
\\short rant
Asking the public for money, giving only the vaguest of guarantees, and not even calling it an investment, since that would bring in many legal obligations. The sort of attitude necessary to do such a thing rarely attracts investors, publishers, or any other form of business backing, for very good reason. All that is left for them is Kickstarter and the like. Is it so surprising that people with that sort of audacious attitude not only ask for money without fair exchange, but ask for bloated amounts of it?
\\end short rant
I do not know much about the subject, so I did some research. Yeah, this has been implemented before as you say. Here is a pretty good thesis paper on doing it: http://liu.diva-portal.org/smash/get/diva2:20165/FULLTEXT01.pdf
Another thesis paper, more on how to do it than how it has been done:
http://www.milkymist.org/thesis/thesis.pdf
Seems rather odd to ask 200k for an open source project that has already been done.
I wonder, did you read the two thesis papers you quoted?
The first thesis is basically theoretical work. He only got as far as displaying a picture on the screen; no actual hardware-level description was implemented beyond that.
The second thesis is really impressive work, and I am sure the guy already brought several years of computer graphics background into it. He managed to implement an SoC according to his plans, but ran into several problems. It seems that the current architecture has a memory-bandwidth bottleneck in the texturing unit. This is not surprising, as the quite arbitrary texel memory accesses and high-latency SDRAM page reads are not really compatible.
I myself implemented a 3D graphics SoC on an FPGA (10 years ago!), and in my opinion the complexity should not be underestimated. It is relatively easy to get some graphics output, but getting to a scalable design is quite another matter.
What could you spend $1,000,000 dollars on in one year that would solve all those problems without actually building any hardware?
One part of the answer is: Professionals.
I had the same reaction when I saw this Kickstarter. They’re not raising money to develop an FPGA-based GPU – it’s already done. It’s even listed as a product for sale on the Silicon Spectrum web site. They’re basically asking the world for $200,000 to open-source their already-existing 2D core. The KS page says the money will go to polishing the code and documentation – that’s a lot of polish! And if you want the 3D core, the world needs to pay them $400,000. Or $600,000 if you want bump mapping.
Small price to pay for the knowledge that a key component of your PC has not been backdoored at the hardware level by the government of a world power. If there is anything to be learned from the Snowden leaks, it’s that Richard Stallman was right about freedom-oriented open software, but his logic also applies to the entire hardware/software stack, down to the silicon. We need open accountability and the capability to peer-review every component of the machines we trust our banking, voting, and personal information to on a daily basis.
I never liked the expression “if you have nothing to hide, you have nothing to fear,” but times have changed, as has the tone of the conversation: if vendors have nothing to hide, there’s no reason their code can’t stand up to public scrutiny.
I don’t know that an open-source GPU is all that important, compared to a DOCUMENTED GPU that you can write open source software for. The rPi problem isn’t that the design is proprietary; it’s that the whole interface and “how to use” information is also proprietary.
Exactly. And you can thank a corrupt and broken U.S. Patent and Trade Office plus the greedy Trial Lawyers of America that feed off it for getting us into this mess.
So, how much is the actual FPGA going to cost that can run all of this code?
My thoughts exactly. Small FPGAs are relatively cheap, but they go up in price rapidly with complexity. Even if this design could run on something relatively affordable, it would still be an expensive part, perhaps doubling the price of a Raspberry Pi or BeagleBone.
I don’t think we’ll have proper FOSS (where the S is silicon) until we can order small-volume lithography and packaging like we can with PCBs right now.
Until that becomes a viable option, no go alas.
And why would anybody care if the GPU is closed source, if you’re programming in OpenGL anyway?
True. If you want to do graphics, OpenGL is designed for that. If you want to use a graphics chip for special, clever mathematical tricks, an FPGA would be more versatile and useful serving as an FPGA. I can’t see what middle ground this is good for.
So use OpenCL.
Why would anyone care if the graphics driver and software built for it are closed-source, if they output open standard file formats anyway?
Why would anyone care if the file formats are proprietary, as long as it’s an industry standard anyway?
Why would anyone care if there are myriad mutually-incompatible walled gardens with very restrictive licensing as long as it at least functions?
Do you see where I’m going with this?
No matter how far you regress along this line of reasoning, it all comes down to the same two things. The manufacturer / publisher will eventually drop all support, leaving you high and dry if you need to keep things together and running. And, if you want to use a product for a purpose the manufacturer never intended, you’ll be flat-out unable to do so. If you don’t really own what you purchase, and you’re barred from adapting or modifying it to your needs, you’re going to be left high and dry sooner or later. That’s not even touching on the possibility that security backdoors might be slipped into the products you buy — why do you think Congress passed over Huawei and ZTE?
An ideal computer would go beyond just the FSF’s goal of free software. The hardware’s design and manufacturing details should be out in the open and available under either a permissive or a copyleft license. Even if putting it together is beyond your skill or means, at least it can be scrutinized by the public, built upon, and kept clean.
Nice to see somebody address at least some of the issues here. I personally don’t have a problem with the manufacturers – who go to an admittedly incredible amount of expense to develop these products in the first place – essentially deciding that they refuse to sell them to me and instead only offer the opportunity to pay a substantial sum up front to license, or even merely rent, their property – PROVIDED nobody else has a problem with me finding the whole concept so annoying that I decide not to play their game and find another way to do what I need to. Even if that means running FreeDOS on a standalone 20-year-old machine with a VGA monitor and a graphics card built by hand on a breadboard, if I’m stubborn enough to want to spend my time that way rather than sign the EULA. This is what freedom for all parties means.
Closed everything is what put a big crimp in porting Android to the Dell Axim X50 and X51 PDAs. The X51V has a 640×480 screen plus a VGA video out.
Without all the specifications it wasn’t possible to fully support the video chip’s functions beyond basic 2D. Paul Burton got the audio working back in 2010. With all the information on the hardware an X51V would be a nice FroYo PDA but Gingerbread is too fat for it. So, dead project now.
Companies that make special purpose chips like to keep all the information locked up, even years after the chips are considered obsolete and have nothing in common with current products.
An open GPU that has good 3D functions would be a useful thing. The design could be licensed for any chip fab to make, with the proviso that any additional functionality added must be fully and publicly documented (no giant fees or NDAs to get the info) so that anyone can write software or design other hardware to use the enhancements.
The additions should fall under an otherwise separate license so they may remain manufacturer exclusive, but carry over the open-documentation requirement to any additions to the additions, so that no matter how far along it gets, it all has to be openly documented.
For example, there’s the OpenGPU. Company A adds an expanded texture cache and must fully document it. Company A has an exclusive license to their cache addition. Company B then devises an improved pipeline for Company A’s texture cache and non-exclusively licenses it to Company A – that addition to the addition must also be open documented. Company C could then find another thing to add to the OpenGPU using Company B’s pipeline but with their part of the design as an open design for any other company to use.
The result could be a very wide array of OpenGPU based chips. Any additions put out under an open design license could be “plugged in” by any company so some very featureful GPUs could be built that are all open design aside from the original core.
Think of it like an operating system where the kernel is proprietary code that’s not free, but anyone can look at all the source code for free. If they want to use it, they have to use it as-is and pay a license fee, but any modules or drivers they want to add, they can do as closed source or open source, charge thousands of dollars a copy or give it away – but they are 100% not allowed to have a closed API. They must provide all the information on how to use their additions.
Hardware or software, there would be some expensive versions, some middling price, and some very inexpensive with nothing but fully open/free additions – but all could at least run the same core software accessing the basic GPU or OS functions.
If someone does a fully open/free GPU core, then there could be some very nice GPUs that are very cheap, costing not much above the cost to fabricate the chips.
So they want $200k to release their existing product, which they admit they made in response to demand a decade ago – a demand which probably no longer exists?
$400k gets the work they are currently doing (and, I assume, are currently being paid for) – 3D graphics.
$600k gets them to actually look at the work they have already done for paying customers and improve it.
But if they can actually manage a million, then they’ll agree that everything before now was rubbish and fully rewrite it?
Or am I just reading this wrong?!
It’s interesting, open source hardware on FPGAs seems like such an elegant concept, but for some reason it never really took off. The concept has been around since at least 2000. There were numerous attempts in pushing it to the mainstream. Remember the freedom CPU? (www.f-cpu.org). But nowadays even http://www.opencores.com hardly seems to be active anymore.
I am afraid this open source graphics core will suffer the same fate as most of the other FPGA-based open source hardware projects of the past…
Maybe AMD/nvidia should just release some simplified barebone GPU with ‘known’ technology that can be NDA-free.
Because if you look at the stuff that ALL vendors already know, it’s quite a lot, and since they all know it, that stuff doesn’t need to be protected from the competition. And seeing how much is already known, that barebones GPU would not be shoddy at all and could have a market.
But what would you do with the GPU? There are plenty of open-source CPU solutions out there. Do they affect you in any way?
Maybe it would be a better idea to reverse engineer common Nvidia or ATI gpus (like Geforce 2 MX, Rage 128 or Radeon 7000) with a scanning electron microscope, by taking a photograph of each layer. Those chips are old, so there is no problem with image resolution vs feature size. Since those chips are probably created from standard prerouted logic blocks, it would be relatively easy for a computer to create a logic diagram for the chip.
There are tons of old scrap cards around, and if someone needs a GPU, they can simply unsolder a chip and connect it to their project.
The GPUs you are mentioning are manufactured in a 180 nm process with ~30 million CMOS transistors, multiple metallization layers, and possibly different device settings (tox, Vt) that you cannot even discern in an SEM. That is a completely different world from a 7000 nm NMOS 6502 CPU or similar chips from the ’80s.
Reverse engineering is probably more expensive than $200k. And then you would still need to verify the design, because it will contain flaws from the process. How would you even do that without a high-level model to compare to?
Interesting idea, but quite infeasible imo.
Also, unsoldering chips is no easy matter. Most GPUs are BGA chips which cannot simply be resoldered…
edit: the 6502 is from the ’70s…
Isn’t this against the Kickstarter rules? It is a completed product they are already selling, but they are holding the source code hostage for $600,000+. Does nobody else see the problem with this?
Many here are completely missing the point of this.
This is NOT some core you dump in your Pro Kiddie Gamer PC so you can play Call of Doody 9000 Xtreme Black Edition.
In fact this core is completely unsuitable for any game in the past 10 years. So stop whining about how it’s not going to compete with your $100 subsidized gamer card.
What this IS for is applications where you already have an FPGA and it needs graphics output. Even more important would be certain markets where code origination/traceability is a huge factor – no verification of a black-box GBI driver is allowable, so those jellybean SoCs are out. This sort of situation is where an open, synthesizable, and parameterized graphics core shines.
Stuff like GPS navigation for commercial vehicles, aerospace, and so on. Where medium resolution basic 3D is completely sufficient for conveying information.
Another thing is how this will be physically implemented. I write FPGA code for a living. Francis says the demo board is a 90K-LE Cyclone II (which is 90 nm technology). Here’s the thing about FPGAs: they are expensive, power hungry, and slow. The same code will run an order of magnitude slower when crammed into an FPGA. Closing timing is a different process here: you’re forced to use the resources already built into the silicon; you can’t just draw up interconnects wherever you need them. I would guess that the internal core frequency is around 50–100 MHz.
90K LEs is a fairly pricey chip, probably around $100 just for the FPGA. But if you already have one on your PCB, it’s a different story. If you were to re-implement the core in a modern Cyclone V or 7-series Xilinx chip, you could probably achieve 200 MHz core operation, with some parts running at 400 MHz.
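To put those clock rates in perspective, here is a back-of-the-envelope fill-rate sketch. The assumption of one pixel per clock with no overdraw is mine, not something stated about this core; real throughput would be lower once texturing and memory latency are factored in.

```python
# Back-of-the-envelope fill-rate estimate for an FPGA graphics core.
# Assumptions (mine, not the campaign's): one pixel written per clock,
# overdraw factor 1.0, and the clock estimates quoted in the comment.

def frames_per_second(clock_hz, width, height,
                      pixels_per_clock=1, overdraw=1.0):
    """Pixels the core can emit per second, divided by pixels per frame."""
    pixels_per_frame = width * height * overdraw
    return clock_hz * pixels_per_clock / pixels_per_frame

fps_cyclone2 = frames_per_second(100e6, 1024, 768)    # ~127 fps at 100 MHz
fps_modern   = frames_per_second(400e6, 1920, 1080)   # ~193 fps at 400 MHz
print(fps_cyclone2, fps_modern)
```

Even at 100 MHz, a simple one-pixel-per-clock pipeline has raw fill rate to spare at medium resolutions, which fits the "GPS navigation and instrument displays" use cases above better than desktop gaming.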
The fact that this has a register-compatible VESA 2d implementation is very valuable. Depending on your application, this may be much more important than even the 3d part.
Mr. Bruno and his fellow engineers have run a company licensing out this very tech for the past decade and seem to have made a good go of it, so right there you know the market exists. The price they are asking is quite reasonable.
The freetards are selling themselves short here, feeling entitled to something that, well, isn’t. Hardware does not manifest itself instantly and at zero cost the way software can be distributed. I encourage them to realize the potential for this core isn’t with the desktop Linux crowd. Sorry. You are not what it was intended for. Please relax your solipsism for a bit and give this project its due credit.
The freetards as you put it aren’t expecting anything.
It’s simple: if you want to open source it, then do it.
Don’t say you want a quarter of a million to open source it, or a million to improve your product.
Open source it and reap the long-term benefits: continued development, at zero cost, from a community.
Make money on a support model – sell support and training after open sourcing, the same way other open source software companies do.
The reality is that open sourcing is most likely a way for the million-dollar stretch goals to be realised inside of a few years, for the company, and for free.
The issue isn’t that freetards want something for nothing; the issue is this company trying to grab a million dollars up front AND grab the benefits of open source at the same time.
So, he did open-source it after the campaign failed: https://github.com/asicguy/gplgpu
Anybody want to write this up?