We were always envious of Star Trek for its computers. No programming needed. Just tell the computer what you want and it does it. Of course, HAL 9000 had the same interface and that didn’t work out so well. Some researchers at NYU have taken a natural language machine learning system — GPT-2 — and taught it to generate Verilog code for use in FPGA systems. Ironically, they called it DAVE (Deriving Automatically Verilog from English). Sounds great, but we have to wonder if it is more than a parlor trick. You can try it yourself if you like.
For example, DAVE can take input like “Given inputs a and b, take the nor of these and return the result in c.” Fine. A more complex example from the paper isn’t quite so easy to puzzle out:
Write a 6-bit register ‘ar’ with input
defined as ‘gv’ modulo ‘lj’, enable ‘q’, synchronous
reset ‘r’ defined as ‘yxo’ greater than or equal to ‘m’,
and clock ‘p’. A vault door has three active-low secret
switch pressed sensors ‘et’, ‘lz’, ‘l’. Write combinatorial
logic for a active-high lock ‘s’ which opens when all of
the switches are pressed. Write a 6-bit register ‘w’ with
input ‘se’ and ‘md’, enable ‘mmx’, synchronous reset
‘nc’ defined as ‘tfs’ greater than ‘w’, and clock ‘xx’.
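For contrast, the first prompt quoted above is trivial to express directly. A hypothetical hand-written Verilog equivalent (the module name is our own invention; the paper's actual output is not reproduced here) is just:

```verilog
// Hypothetical hand-written equivalent of the simple prompt
// "Given inputs a and b, take the nor of these and return the result in c."
// Module name is illustrative, not taken from the paper.
module nor2 (
  input  wire a,
  input  wire b,
  output wire c
);
  assign c = ~(a | b);  // NOR of a and b
endmodule
```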
That last example shows the problem with this. Human language is really not so great for describing things like this. Now you not only have to define the problem but also figure out the correct way to say it so that DAVE will spill out the right Verilog code. Normal programming languages might not be so verbose, but you know exactly what some sequence of characters ought to do.
We’ve been here before. COBOL promised to bring programming to everyone by allowing things like “Multiply Rate times Hours giving Pay.” As it turns out, ordinary people still don’t know how to program in COBOL, and people who program want to type “Pay=Rate*Hours” anyway.
Don’t get us wrong. It is an interesting use of GPT-2 and we appreciate the effort. But the reason languages like Verilog and VHDL exist is because they are a compact way to specify what you want with a minimum of ambiguity. We’d rather focus on some of the efforts to convert conventional programming code into Verilog or VHDL. That seems a lot more useful.
We spend enough time yelling at Google Maps to tell it we want to go to Loch Hallow and not Valhalla. Then again, you may disagree. The comments will tell.
54 thoughts on “I’m Sorry Dave, You Shouldn’t Write Verilog”
When someone says, “I want a programming language in which I need only say what I want done,” give him a lollipop. – Alan Perlis
Equation solvers can be useful, but the hard part of programming will always be creating a word problem that -can- be solved… and ideally one that does what you want, too.
In math classes throughout life, students groaned when the teacher said there will be word problems in the exam.
Looking back over the years, I realize that most of Real Life is word problems!
“Most of real life is problems with words.” IFTFY
I cannot count the number of times that better communication would have solved the issue entirely.
Not to mention the number of pointless arguments that happen because two people define one word slightly differently.
FYI for regular Hackaday readers, I didn’t write the above comment.
The only difference is you basically never have to do any of the arithmetic or algebra yourself unless you’re doing some Real Engineering ™.
Thankfully I didn’t have to endure that at all in my studies. I would think that countries that use it are the ones that have poor math and science skills, e.g. the US.
The only word-based math problem I came across IRL is income tax, because that’s math written by lawyers and bureaucrats.
Math is a precise language and doesn’t carry the ambiguity of written language. Science wouldn’t be what it is without it. It would be like casting spells. :P
I often tell students: your boss will never come to your office and say, “We have a 6V lightbulb with a 200 ohm resistor in series. How much current does it draw?” Most of our job is interpreting hazy and sometimes conflicting statements into actual solutions that get the job done.
Visual is definitely the way to go for “easy programming for all”.
For something in the ten-lines range, I’d much rather point and click than write code; the computer has way more control and can help you out way further than what autocomplete can do, integrating the editor and documentation totally seamlessly.
For anything too big for that, we already have a near-English-like language anyone can learn; it’s called Python.
“near English like language anyone can learn”…
sz = len(l)
if sz <= 1:
print ('sz <= 1')
return [p[:i]+[l]+p[i:] for i in range(sz) for p in perm(l[1:])]
The comment box has messed up your indentation, so ironically that isn’t valid Python!
“There’s a sign on the wall
But she wants to be sure
‘Cause you know sometimes words have two meanings” – Stairway to Heaven, Led Zeppelin, 1971
That will always be the problem with communication between humans, let alone a human and a machine.
“Biscuit” means one thing in the US and a different thing in other countries; both are flour-based baked foods.
“Chips” means one thing in the US and a different thing in other countries; both are made with potatoes.
And then you have a long list of words that change depending on the context of their use.
e.g. bank, which could be a financial institution, or blood storage, or gene storage, or ova storage, or seed storage, or sperm storage, or, in geography, a raised portion of seabed or sloping ground along the edge of a stream, river, or lake.
Computer languages are designed to help avoid the ambiguity of spoken languages.
I agree with Daniel: if you want to lower the bar on design, limiting options with a visual interface is the way to go, where you add magic blocks to your design even though the person using them does not fully understand what is happening inside.
A theory may mean something different to a scientist and a non-scientist, even if they are siblings. Just like common sense and logic.
“theory and implementation are the same, in theory; but different in implementation”
(I don’t remember the original quote, or where I saw it)
Reminds me of the awkward problem I found myself in years ago when having just met a young British woman in an American bar and was looking for an ice breaker… “Wow, nice looking fanny pack you have there…” She was not happy.
Reading it once again, I realize “she” wasn’t looking for an “ice breaker” (something to break up ice).
Yeah, I’ve ruined a number of first impressions myself, but mostly with native speakers (sigh!).
You mean connecting symbolic blocks to form circuits? Aren’t those called schematics? We call that schematic capture. :P The major FPGA tools have that as an option.
The problem with GUIs is that the person who made them is either good at making circuits or at coding, but not both. When you have to use one, you’ll quickly get blocked by the GUI, because the coder doesn’t have the same insight, or the UI is just awful.
I always thought I wanted schematic capture. Until I worked on the 68000. Those prints took up multiple huge pages and we crawled around on a big table to examine them. As an exercise, I sometimes have students draw up a 7 segment decoder using logic gates and then in Verilog. That usually settles any argument about schematic capture for complex cases.
Though “logic gates” could be easily replaced by a diode ROM but that would be cheating, right ? :-)
“For something in the ten-lines range, I’d much rather point and click than write code,”
Seriously? GUI-style programming is so slow. So… slow. And incredibly inaccurate. I mean, it’s like trying to freehand-draw in a CAD program.
Think about your arguments: you’re saying that pointing and clicking on what, functional blocks or some such is better?
How is that different than having a GUI represent what you’re doing on a command, and you *typing* what you want it to do? Pointing and clicking represents functional blocks by X-Y coordinates, with some small slop factor on its size. Instead you could just represent them by *names* and tell the program what to do.
In other words, if you want to program in larger functional blocks, fine. Just encode that in a new language. This is basically how, for instance, the “visual” thingy in the more modern FPGA tools (Vivado, etc.) work when you’re really proficient in them. Except for some wackball reason they’ve decided to use Tcl for scripting, thus ensuring that anyone programming in FPGAs has to swap between different syntax all the freaking time.
Want to create an AXI4 crossbar? create_bd_cell -type ip -vlnv stupid_long_name crossbar1. Or you could click to bring up a list, scroll down to find it, double click on it, then click on it and give it a name. So… tons of actions instead of 1 line.
I mean, yes, it’s a pointlessly verbose line, but that’s again because they’re wacko and scripted the whole thing alongside the whole GUI in Tcl. If designers had half a brain they’d find a way to represent it in the language people are using. Basically a scripted SystemVerilog or VHDL with a few extensions. Something like crossbar1 = new(“crossbar1”, stupid_long_name.inst).
The same steps have to be followed. The author thinks talking is inferior to typing.
As for Python being “a near English like language” I have two things to say:
1. No it’s not.
2. The last thing you should want is “a near English like language”.
“We’d rather focus on some of the efforts to convert conventional programming code into Verilog or VHDL.”
Well, well, for those who don’t know, VHDL is a “conventional programming language” (derived from Ada) that “happens to be well suited at hardware description”.
Transpiling “conventional programming code” into any pair of languages always creates the same problem that is described here.
Learn VHDL, damnit! It’s not a “glorified Verilog”; it’s something you can write an HTTP server with…
All the math I was taught and all the coding skills I have learned were primarily delivered to me through (English) speech and text. English in, code out. Extending that, customers use English to tell me what they want. From that I output code.
It seems reasonable to me, then, to picture me putting English in and an AI outputting code.
Is it so different from the many very good language translation machines that we use daily? I suppose the latter’s output is more flexible and sometimes needs reinterpreting, but the principle, it would seem to me, is the same.
Either you haven’t been programming very long or you have exceptional customers. In my experience they can rarely describe well what it is they want, and several iterations of the design plus considerable development experience are required to translate anything but trivial requirements.
This is where AI fails. It lacks both feedback (clarification of what is being asked) and the amount of background data to make sense of the input.
Or maybe they are just very good at asking the right questions of the clients or guessing what they probably mean.
Not something a computer can really do. It’s so contextual, and localisation will matter: with the British love of understatement and sarcasm, an AI just being told what was wanted could take the very flaws we are actually complaining about as being a wanted feature!
That said, cdilla is correct: most engineers get told what is wanted in normal conversation by somebody unqualified to make it (so probably using no technical jargon, or using it wrong), and then deliver the CAD/program/part that matches. I would however say it is very different from the usual translation machines: using those, both parties tend to know the frame of reference and/or have body language to help clue them in, and that matters hugely. I really don’t see it working without either very well defined contexts or that back and forth of the compiler asking questions based on input to cut down the ambiguities.
Developing software is not a translation problem from requirements to code. You can ask for the moon on a stick without comprehending in the slightest just exactly how that’s even going to work. This is why software engineering is not a trivial task.
inb4 anyone claims the moon on a stick is impossible https://ychef.files.bbci.co.uk/976x549/p07gyz73.jpg
Oh I agree, which is why you need communication both ways – to tune the feature set wanted to the other limitations. There’s that XKCD sketch that sums up dealing with those that don’t understand https://xkcd.com/1425/
But it is still a translation problem underlying that too – if you don’t talk about things with the same frame of reference your program simply will not match the clients expectations at all.
Also, just because it would be nearly impossible to do some requests doesn’t mean it can’t be done; there is no task that is impossible with computers if you are willing to put in the time (though for more computationally taxing problems much of the time will be like Deep Thought, the machine just running for eons before it has a result, while for other problems it will be the programming or datasets taking years to make functional enough).
“Customers use English to tell me what they want. From that I output code.”
And soon you will be replaced by an AI.
Obligatory XKCD reference: https://xkcd.com/568/ , panel 2
Guys (and girls), keep in mind that these various AI programs generating this or that are mostly cute lab demos intended for publishing research papers. Nothing more. Researchers are paid/given tenure according to how many papers they publish.
And hyped up, both in the research itself (it needs to give an idea of being practically useful, since it is applied research whose funding could otherwise be questioned) and then by media who don’t know the difference between a door and a boor, take all the claims verbatim, and amplify them in a breathless voice 10x.
So no need to worry about your job being replaced by a GPT-2-wielding bot any time soon. Unfortunately a clueless manager or two could decide that writing code “without coding” will work because they swallowed the bait of some unscrupulous startup huckster, causing damage.
Agreed 100%. A lot of AI stuff may appeal to naive neophiles, but it does not amount to much; in fact the trick is in knowing which discoveries have the potential to evolve into useful tools rather than mathematically elegant artifacts that are otherwise dead ends. AI researchers are pretty bad when it comes to owning up to this issue, so if you point out e.g. how GANs are really bad at encoding global relationships such as symmetry, they will pretend that you don’t exist, even if you plonk a very solid example under their nose in public, multiple times. See “The Snowflake Problem”, which is AFAIK still unsolved, or even widely acknowledged, but is very important as it proves that GANs may never be able to represent a significant facet of the universe’s fundamental geometric characteristics. That is obviously dangerous to overlook if you are dealing with real-world phenomena and coming to decisions with a GAN in the pipeline.
FYI, “The Snowflake Problem” or Koch Snowflake…
Managers have been willing to buy into this idea since the 90s: “I have C people, and these guys have a tool that converts C into Verilog…” It still doesn’t work, decades later. Maybe in the future, but not now.
Big project I was involved in with FPGAs. I said I needed 2 people and 6 months to do it. But another group said they could do it with 2 people in a month with LabView. LabView is great for certain things but this was rather complex. Of course, the money goes to the one month project. After 4 months I had to get involved again to figure out why they could compile two times in a row and wind up with a working system one time and not the other time. Took about 6 weeks more to figure that out (LabView issue). Then it took them another 3 months to get it all done.
In hindsight, would have been cheaper to do it “right” (the problem they stalled on would not have happened without the tool, but I can’t really say more than that). But many people only look at the estimate and don’t factor in the reliability/quality of the estimate.
Sorry to disagree, Al, but imho the effort would be better spent creating good tutorials for people who need/want to learn VHDL/Verilog. And I mean really good, to the point of being boring, with enough explanations and examples and exercises.
Too many tutorials gloss over some points that the reader “should already know”, but sometimes the person simply doesn’t: they’ve never heard of it, or they know it by a completely different name and the tutorial doesn’t explain it properly, so the person could go “I know this as X but they call it Y. OK, understood.”
As for the examples up there, when someone gets to the point of being able to describe the problem with that clarity (well, minus the vault-door thing), it would probably be faster to just finish writing what they want in the correct language anyway.
I need to finish them up at some point, but what do you think of the FPGA bootcamp for Verilog on Hackaday.io? https://hackaday.com/2018/08/06/learn-fpga-fast-with-hackadays-fpga-boot-camp/
Granted, #0 isn’t Verilog but the other ones are. There are a few more that need to be posted. I’m sure like anything, there is room for improvement, but I think there are good tutorials out there and our boot camp is among them.
Just joined and this site is great. I’m a retired EE circuit designer, and even though I spent the later part of my career in management, I still remember Verilog; does it still exist? And VHDL as well. Brings back memories.
Oh yes, they are still the central languages for digital design. Verilog is still used everywhere, although people prefer “higher level languages” that are simply wrappers or generators of HDL.
VHDL is undergoing its latest language refresh cycle (at IEEE and in mailing lists) and adds refinements over refinements, yet VHDL’93 is still the main workhorse because tool vendors would not implement anything new unless users fight hard, but users would fight only if they knew the vendors would implement them… Fortunately, GHDL is there and does to VHDL what GCC did to C: that’s the true power of Libre Software!
Most modern CPUs/GPUs and similar complex logic designs start life in VHDL and/or Verilog. This allows them to be simulated in software, then hardware (in the form of FPGAs), before committing them to final silicon.
On top of that some custom processing tasks that aren’t common enough to warrant hardware acceleration, but benefit from hardware level parallelism, are written in them also.
I write Verilog for a living, and I can tell you all these attempts at simplifying the language are misguided. There are some issues with Verilog, but the biggest problem is that the synthesizers still don’t allow any sort of abstraction. Everything has to synthesize fast and small. If you didn’t need fast and small, you’d do it in software.
To get anything fast and small and tight, you have to code at an extremely low level. It’s akin to assembly. Something like transposing a matrix, which you’d do with a single tick mark in Matlab, is a serious affair requiring custom design and days of study of various architectures. I just spent a couple of days hand-designing some modulo operators. Sure, you can use the ‘%’ operator in Verilog, and the synthesizer will create something that functions, but it is massively bloated and will not function at any sort of reasonable speed. Even the seemingly simplest things are still a great effort.
Once you get a design good enough, it’s not abstract, it’s not portable, and it’s barely parameterizable. Pipeline stages have to be moved manually to meet timing. Tools that increase abstraction are fine for demos, but don’t work for serious applications. If you want to actually help, you need to work on synthesizers so that we can move from low-level Verilog to mid-low-level Verilog. That’s what we’re shooting for.
Right now I work at the ASIC gate level to get anything significant.
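To illustrate the kind of hand optimization described above (a sketch under stated assumptions, not the commenter's actual code): when the dividend is known to be bounded, a constant-divisor modulo can collapse to a single compare-and-subtract instead of the general divider that ‘%’ synthesizes to.

```verilog
// Sketch of a hand-optimized modulo for a constant divisor D, valid only
// under the assumption that x < 2*D. One comparator and one subtractor
// replace the bloated general divider logic a '%' would produce.
// Module name, parameter names, and widths are illustrative.
module mod_const #(
  parameter WIDTH = 8,
  parameter [WIDTH-1:0] D = 100
) (
  input  wire [WIDTH-1:0] x,  // assumed x < 2*D
  output wire [WIDTH-1:0] r   // r = x % D under that assumption
);
  assign r = (x >= D) ? x - D : x;
endmodule
```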
Fortunately there are methods and processes to keep your code portable : divide the design into small parts, write and simulate each of them in “high level” code, write the testbenches to cover all meaningful cases, then from this base start to replace each part with single gates.
It’s more than “manual synthesis”, because you can go back to the original more abstract version which also serves as a verification tool. Today I can write at least 2 or 3 versions of the same unit and ensure they are equivalent by thorough unit tests, and I can pick the appropriate version depending on the target (FPGA or ASIC ?)
BTW : VHDL makes this easy :-) Why would I have to invoke C or Python when VHDL is already a programming language ? Being able to keep everything in the same language helps incredibly.
I have a lot of Matlab models, which I can verify in Matlab, which also generate low-level Verilog for me. So I can change a parameter in Matlab and regenerate new Verilog. Ideally, I could just do that in the Verilog directly, but the synthesizer chokes if you try to do anything too fancy, so I have to resort to writing code that writes code.
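The kind of in-language parameterization that would make code-writes-code unnecessary might look like this (a sketch with made-up names; whether a given synthesizer maps it well is exactly the complaint):

```verilog
// Sketch of parameterized Verilog: sum N packed W-bit inputs with a
// behavioral loop, instead of regenerating new Verilog from Matlab for
// every parameter set. Names are illustrative; synthesis quality varies.
module sum_n #(
  parameter N = 8,
  parameter W = 12
) (
  input  wire [N*W-1:0]         x,   // N inputs packed into one bus
  output wire [W+$clog2(N)-1:0] sum
);
  integer i;
  reg [W+$clog2(N)-1:0] acc;
  always @* begin
    acc = {(W+$clog2(N)){1'b0}};
    for (i = 0; i < N; i = i + 1)
      acc = acc + x[i*W +: W];   // indexed part-select per input
  end
  assign sum = acc;
endmodule
```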
Yeah, no. There’s nothing in VHDL that SystemVerilog can’t do (and the reverse isn’t true). Your preferences on a language might be one thing, but this isn’t what people are talking about.
VHDL is still a gate-programming language. You can’t say “Do this math on these things when I tell you to, pipeline it so you can handle it every clock, and tell me when you’re done.”
*That’s* what the C stuff does.
” There’s nothing in VHDL that SystemVerilog can’t do (and the reverse isn’t true). ”
I’m really curious to know what, how etc. : we know it’s a sterile debate but there are still so many things to learn and enhance :-)
“VHDL is still a gate-programming language. ”
No it’s a real language :-) A bit crippled in certain aspects (grmbl) but Turing-complete and with some surprising features (let’s say, I don’t even write my assemblers in C these days, and the source code is more readable). And with GHDL being mature now, new worlds are opening…
“You can’t say “Do this math on these things when I tell you to, pipeline it so you can handle it every clock, and tell me when you’re done.””
Of course I can “say” it.
I’m designing a pure-VHDL system to do exactly that :-)
(OK I exaggerate a bit but you gave me yet another idea of trivial application for my netlist analyser)
Proof : https://hackaday.io/project/162594-vhdl-library-for-gate-level-verification
And I don’t even need to use any C extensions/plugins to complement the source code.
“I’m really curious to know what, how etc. : we know it’s a sterile debate but there are still so many things to learn and enhance :-)”
Here. Literally the only “feature” differences between VHDL and SystemVerilog listed are stylistic – strongly-typed versus weakly typed, for instance, or how you specify how an object acts, or how you specify some weird custom-resolved net (which would be awkward in Verilog, but totally possible). Saying VHDL has “physical types” is insane and SystemVerilog doesn’t, for instance, because the concept of “physical types” only exists in a strongly-typed language, and strong typing is merely a language choice. It’s all just stylistic differences.
*All* modern HDL languages are “real” languages. I have no idea where you got the idea that VHDL is somehow more “real” than even basic Verilog. It’s *certainly* no more “real” (and definitely not more Turing complete!) than Verilog. Saying that VHDL is Turing complete and claiming that’s an advantage is fairly insane, considering it’s pretty trivial to prove that *any* HDL is Turing complete because you can always *implement a processor*.
“Of course I can “say” it.
I’m designing a pure-VHDL system to do exactly that :-)”
There’s a difference between *being able* to do something in a language and having the language be *designed to describe it*. I can implement tons of C++ features in C using a custom linker and wackadoodle name mangling, but C fundamentally isn’t *designed* to work like C++.
I can hang attributes off of modules to group them together and specify clock speeds, for instance, or define some custom generic and have everything use that. But it’s not *designed* like that, because it’s not *intended* for stuff like that.
Again, any HDL program can literally do whatever you want. After all, you can *literally* have them implement a processor, a C compiler, write whatever you want in C, and have them do it that way. The question isn’t what’s *possible*, it’s what’s *easy*.
Pat : your link is broken :-/
But you touch on many interesting points…
Anyway, as mentioned in another comment, the “retiming” that fine-tunes pipeline stages is best handled at the backend side of the toolsuite. It’s not a matter of language or coding paradigm…
Also useful is this link.
“Anyway, as mentioned in another comment, the “retiming” that fine-tunes pipeline stages is best handled at the backend side of the toolsuite. It’s not a matter of language or coding paradigm…”
The fact that there *is* a “front end” and a “back end” of the toolsuite is literally what I’m talking about. That’s *exactly* part of the language/coding paradigm. There’s no reason that you have to work in a language that goes HDL->netlist->translation->implementation. You could work entirely from higher-level entities that the tools can go straight to implementation level.
Again, suppose I want a counter with a loadable termination count. If you do that, synthesis tools basically just generate an adder with some optimization on what it is, etc. But at synthesis, it has *no idea* how crowded the design is, or how constrained things are. And when you’re at implementation, it now only has a vague idea that it’s a counter with a loadable termination count. So it’s no longer easy for it, during place & route, to look at it and say “yeah, this implementation of the counter’s not going to work, I’m going to swap it with this *other* implementation that’s functionally equivalent.”
You might look at this and say “well, OK, I could still do this with VHDL, just with a bunch of black-box modules,” and then somehow update the tools to handle custom types or something. But this doesn’t allow for encapsulating complex interfaces between modules easily. You can’t, for instance, just abstract off some interface between two modules and say “I want to send a frame with this data to this module” *completely* hiding what the actual method for doing that is. What’s the advantage of that? At implementation point, the P&R tools could look at it and say “OK, I can’t have it fit sending a frame straight away, I’m going to stick a pipeline stage in here automatically and it should be fine.”
Or even more insanely, you could imagine tools saying “OK, you want to get this data from here to there, with no latency requirement. I seriously need them to be on the opposite sides of the chip. So I’m going to transform this interface and add a bridge in the middle by adding adapters at either end so I only have to pass 1 signal.”
I’ve done this: use a highly-compact UART macro to convert a wide datapath to a narrow one to move it a long distance without significant resource usage. It’s a handy trick. It’d be nice if tools could do it automatically if they had to.
You could kinda-sorta hack it together with more black box modules (see the trend?) like “frame sender” and “frame receiver”, but it would still be a hack. SystemVerilog’s encapsulation’s better in that sense, but you would want “standard” interface methods that you could tack onto them, like transmit/acknowledge latency requirements, or bitmasks, etc.
This is what I’m talking about. I’m not talking about syntax or coding preferences. Those don’t matter. Verilog and VHDL are both perfectly capable of representing any *logic* you want. I’m saying that neither of them are built to try to represent implementable things *above* logic in such a way that synthesis + implementation tools no longer have to be separate.
Yeah, I don’t really agree. For FPGAs specifically, the issue is that things are essentially split in three: synthesis, translation, place and route. First interpret the language logic into “standard” logic blocks, then translate those logic blocks into hardware primitives, then figure out what to do with them.
The issue is that by the time you get to the hard part (place and route) you’ve lost all idea of what you’re trying to do. Specify “a + b” in Verilog, and you get a ripple-carry adder. Want to combine “a+b+c+d” in 2 clocks? Specify “ab=a+b, cd=c+d”, then “ab+cd”, registering in between. Synthesis generates three ripple carry adders and registers in the middle. There’s no way to say “use carry-save adders for this instead,” for instance.
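The two-clock structure described there, written out (a sketch; module name and widths are illustrative):

```verilog
// Sketch of the two-clock a+b+c+d described above: two parallel adds in
// the first clock, combined in the second. Synthesis sees three generic
// ripple-carry adders plus a register stage; there is no hook to say
// "use carry-save adders here instead".
module add4_2clk #(parameter W = 16) (
  input  wire         clk,
  input  wire [W-1:0] a, b, c, d,
  output reg  [W+1:0] sum
);
  reg [W:0] ab, cd;
  always @(posedge clk) begin
    ab  <= a + b;     // stage 1: ab = a+b
    cd  <= c + d;     // stage 1: cd = c+d
    sum <= ab + cd;   // stage 2: uses previous-cycle ab, cd
  end
endmodule
```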
The problem is in the language, there’s no way to say “this path is slow, save area here” or “this path is really effing fast, do whatever you need to to optimize this for speed.” Or far more importantly, “I don’t care how long this process takes, add pipelining if you need to.” Obviously tools give you *some* help for this, but you can’t easily group *portions* of the logic together. Best you could do is push some of the logic into a separate module, and attach some custom vendor attribute to them. Or attach vendor attributes to *all* of the elements or something. But in the end it’d be ridiculously crufty.
Obviously just adding 4 things together’s no big deal, but try adding 32 things at 320 MHz, at which point you only have enough time to get through basically 1 element. At that point you’re obviously looking at arranging a Wallace tree with the various objects. And the problem *there* is that the best architecture for that is always system-dependent. How many inputs do you have, how many bits do you have, can you afford to drop precision, etc. So what you *want* is the ability to preserve that information *past* synthesis+translation, so that when the P&R tools run into trouble they can try *alternate* implementations to see if they work better.
*Definitely* the one big thing that’s lacking in both VHDL and Verilog is the ability to autopipeline. As in, “given inputs and an ‘enable’ signal, give me outputs and a ‘valid’ signal. Take as many clocks as you need to do the math in between, delaying ‘enable’ by the same number of clocks to give ‘valid’.” And that’s the exact thing that most higher-level code-based synthesis tools do.
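The hand-written version of that enable/valid contract looks something like this (a sketch with placeholder math and an arbitrary three-stage depth; the pain point is that the delay line must be kept in sync with the pipeline depth by hand):

```verilog
// Sketch of a manually pipelined block with the enable/valid contract
// described above. The math is a placeholder; the shift register delaying
// 'enable' into 'valid' must match the pipeline depth (3 here), and must
// be updated by hand whenever a stage is added or removed.
module pipe3 #(parameter W = 16) (
  input  wire         clk,
  input  wire         enable,
  input  wire [W-1:0] x,
  output wire         valid,
  output wire [W-1:0] y
);
  reg [W-1:0] s0, s1, s2;  // three pipeline stages
  reg [2:0]   en_d;        // enable delay line, one bit per stage
  always @(posedge clk) begin
    s0   <= x + 1;               // stage 1 (placeholder math)
    s1   <= s0 ^ (s0 >> 1);      // stage 2 (placeholder math)
    s2   <= s1;                  // stage 3 (placeholder pass-through)
    en_d <= {en_d[1:0], enable}; // delay enable alongside the data
  end
  assign y     = s2;
  assign valid = en_d[2];        // 'valid' = 'enable' delayed 3 clocks
endmodule
```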
“Pipeline stages have to be moved manually to meet timing.”
At least for FPGAs, register/pipeline balancing is all automatic if you enable it. You just need to add the number of pipeline stages you want, and optimization will push logic around to balance it. You can literally add everything in one step, reregister it three times (preventing stuff like shift-register extraction) and the register balance will take care of everything for you. Still can’t refactor the math, obviously, the “synthesize to basic logic” step prevents that.
Yes, the modern synthesisers spare us the effort to “rebalance” the logic stages.
My FPGA toolsuite calls this “retiming”, so no, it’s not a question of language, and the comparison SystemVerilog vs VHDL can’t be made here.
Routing adds significant delay!
You could re-time the gates/pipelines all you want at the HDL-level and yet miss your timing target.
The place&route system should be guided but it has the last word for the fine details.
I’d be interested in what FPGA tool you’re talking about with respect to pipeline balancing. I just did some experiments to see if the “retiming_backward” feature in Xilinx Vivado works; it most definitely does not. The results were a complete joke. If you are talking about Synplify, I might believe it. I haven’t used that for a long time but it was legitimate.
I mostly use Libero (from Actel/Microsemi/Microchip), which is based on Synplify.
Synplify is the synthesiser and performs the HDL-to-gates translation.
The “retiming” option is in the next, “in-house” P&R stage, where actual timing is available and a gate can be pushed upstream or downstream.
I tick that option when my timing is tight, but I use ProASIC3 parts, which are not speed demons anyway :-)
Computers are stupid and don’t do subtlety and context. Humans, and all their languages, are full of subtlety and context (even if we think we’re being clear), so the best thing is just to learn a language your damn computer can understand and save talking for people and high-level computer interfaces.
A natural spoken-language interface doesn’t preclude programming, to the best of my knowledge, and I don’t recall any Trek episode insinuating that it does.
Verilog or any language, including COBOL (shiver): programming by spoken language would need contextual and knowledge-base (database) inference. That’s more programming, though, likely through a step-by-step process of verifying intended results. Positive and negative results, or ‘paths’, would also add to the programming database.
A 94.8% success rate, however, is still a fail. Better to type what you want, but credit to the GPT programmers for getting that close. No magic here. Maybe GPT-5 will be ready for interplanetary use. GPT-2 is most likely already good enough for home automation.
Voice-over options for punctuation and grammar structuring as I type or speak into a microphone would be a godsend for those who have trouble with that: the AI would auto-correct my mistakes as I type, or as I dictate into my PC microphone or smartphone, like doctors talking into a phone while the AI records it and prints a transcript for recordkeeping in case of lawsuits.
Sort of like doing that in Edge, with voice-over for English and other languages, plus audio options if you want to learn with your ears rather than by reading: the AI recognizes the text and auto-translates it for a proper audio voice-over response in the MS Edge browser and in the other apps Microsoft, Apple, and Google make on every OS platform, mobile and PC, even in a smart-TV app, since those run the Android TV OS.
The Android TV OS YouTube app doesn’t have a voice-over commenting option with my Comcast TV remote or Sony smart-TV remote. Could you imagine if Twitter had this option on smart TVs, with fans live-tweeting NFL, MLB, FIFA, and NBA games while the AI corrects the grammar, structure, and punctuation from voice-over through the remote, and then you press a button for audio feedback, even in different languages after the AI translates the text for FIFA games? I can! The same goes for other OS platforms, mobile and PC, using this writing-correction AI for translated audio in web browsers and social-media apps, whether you type or use voice input, in business apps and OS operation too. Two Minute Papers covered a way to fix lip-sync issues in foreign films, say Japanese to English, with voice-overs, not just in video games now but in film and cable TV, along with video conversion that can turn footage from any camera into native HDR1000 8K for TV, mobile, and PC screens. Watching TV, playing video games, and even the most BORING office work becomes simple if you travel internationally and have language barriers to bust, both in voice and in reading.
Start here, with this comment: make an AI that fixes all the problems as I type or use voice, first in English, then add a simple button tap or press for language translation of the text and foreign-language audio AI learning, for PC apps and web browsers, the same for mobile devices, and then for Android TV social-media apps using voice through the TV remote or a keyboard in Android TV OS. Just sayin’!
Please be kind and respectful to help make the comments section excellent. (Comment Policy)