A Laptop with an External Graphics Card?

It used to be that desktop computers reigned supreme in the world of powerful computing, and to some extent, they still do. But laptops are pretty powerful these days, and in our experience, a lot of engineering companies have actually swapped over to them for resource-hungry 3D CAD applications. But what if you still need a bit more power?

Well, [Kamueone] wasn’t satisfied with the performance of his GTX 870M-equipped Razer Blade laptop, so he decided to hack it and give it its own external graphics card.

Now, unfortunately, this really isn’t as simple as running some PCIe extender cables. You’ll have to modify the BIOS first, which, according to [Kamueone], isn’t that bad. But after that’s done you’ll also need a way to mount your graphics card outside of the laptop. He’s using an EXP GDC Beast V6, which connects directly to the laptop motherboard over a mini PCIe cable. You’re also going to need an external power supply.

[Kamueone] ran some benchmarks, and upgrading from the stock onboard GTX 870M to an external GTX 780 Ti more than tripled the frame rate: 40 fps stock, 130 fps upgraded!

Digital Light Processing, So Many Tiny Mirrors

Did you know there are a million little mirrors flickering back and forth, reflecting light, inside some modern projectors, like a flip-dot display at the micro level? In his video, [Ben Krasnow] explains the tiny magic at work in DLP, or Digital Light Processing, technology using a scaled-up model of the moving parts that he constructed.

LCD projectors work much like old slide projectors: light is shone through a transparent panel containing the image, which is then focused and enlarged through a lens. DLP projectors, however, achieve the moving image in a slightly different way. A beam of focused light is shone onto a chip carrying an array of astonishingly small mirrors. When a mirror is flipped in one direction, it reflects the light out through the lens and creates a visible pixel. When the mirror is tilted in the opposite direction, no light is reflected and the pixel is dark. All of these tiny moving parts are actuated by static electricity, and since a pixel can effectively only be on or off, with no range of values in between, each mirror must flutter at a rate fast enough to create the illusion of intermediate intensity, much like pulsing an LED to create a dimming effect.
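To make that last point concrete, here is a minimal sketch of how a strictly on/off mirror can fake grayscale with a duty cycle. The per-subframe scheme and the mirror_on() helper are our own illustrative assumptions; real DMD controllers schedule weighted binary bit planes, which this deliberately glosses over.

```cpp
// Minimal sketch: fake grayscale with an on/off mirror via a duty cycle.
#include <cstdint>

// Returns true if the mirror should reflect light during sub-frame `t`,
// so that over many sub-frames the on-fraction approaches level / 255.
bool mirror_on(uint8_t level, unsigned t)
{
    // Error-accumulating dither: turn on exactly when the running total of
    // `level` crosses another multiple of 255, spreading the on-time evenly
    // across the frame instead of bunching it at the start.
    return (t + 1u) * level / 255u > t * level / 255u;
}
```

Driven fast enough, the eye integrates those on/off flashes into a steady mid-gray, exactly like PWM dimming an LED.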

In addition to slicing open the protective casing of one of these tiny micro-mirrored chips to give us a look at its surface under a microscope, [Ben] also built his own functioning matrix from tiles of mirrors and metal washers sandwiched around pieces of string. A wound electromagnet positioned behind each tile tilts the pixel into position when a current is run through the wire, although he didn’t sink the time needed to build out the full array in this manner (and we don’t blame him). If you do have the time, and add in a high-powered flashlight, this makes for an awesome way to shine messages on your roommate’s wall.

Continue reading “Digital Light Processing, So Many Tiny Mirrors”

Nerdalert: German TV Producers’ Amazing Vectorscope Animations

German weekend late-night comedy show “Neo Magazin Royale” has a bunch of super-nerds behind the screens in the production studio. This is apparently what they do when they’re (not) working: making test screens that render as multiple animations on their test equipment.

While others out there are limited to displaying cool graphics on oscilloscopes, these guys have vectorscopes and waveform monitors. A vectorscope is like an oscilloscope in X-Y mode, with one screen that decodes the video’s color space and one screen for the audio (in stereo). A waveform monitor plots out the brightness levels of a test image. Normal studio techs use these to calibrate their colors, brightness, and audio levels.

Apparently, these guys programmed a custom test screen that would: a) encode a small animation of a 20-sided die spinning around the show’s logo in the color channel, b) encode the show’s logo in the left and right sound channels, and c) encode their production company’s logo in the screen’s brightness.
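The audio half of that trick is the easiest to picture: a stereo pair traced on an X-Y meter is just a list of points played back fast enough to look like a solid shape. Here’s a hedged sketch along those lines; the Point list, the raw little-endian PCM output, and the scaling are our own assumptions, not the show’s actual tooling.

```cpp
// Sketch: turn a 2D outline into stereo samples that draw it on an X-Y meter.
#include <cstdint>
#include <fstream>
#include <vector>

struct Point { float x, y; };   // logo outline, coordinates in -1..1

// Write raw 16-bit little-endian stereo PCM that repeatedly traces the outline.
// Left channel carries X, right channel carries Y.
void write_xy_audio(const std::vector<Point>& outline,
                    const char* path, int repeats = 1000)
{
    std::ofstream out(path, std::ios::binary);
    for (int rep = 0; rep < repeats; ++rep) {
        for (const Point& p : outline) {
            int16_t left  = static_cast<int16_t>(p.x * 32000);
            int16_t right = static_cast<int16_t>(p.y * 32000);
            out.write(reinterpret_cast<const char*>(&left),  sizeof left);
            out.write(reinterpret_cast<const char*>(&right), sizeof right);
        }
    }
}
```

Play the result back as raw 16-bit stereo PCM at, say, 48 kHz and the stereo meter traces the outline over and over, persisting as a visible logo.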

At the end of the video, the director Patrick (in the glasses) admits that they’ve spent about three months working on this project and everyone starts laughing. “And who gets anything from this? Nobody!” says the show’s host.

There is one way to rectify that, though: post the source code!

[darNES] Stores Cached Netflix on NES Cartridge

Let’s play a quick word association game: Peanut butter…jelly. Arches…golden. NES…Netflix? That last one sounds like a stretch, but the [darNES] development team had a Hack Day and a dream. They started with cached Netflix data and ended up playing it on an ordinary NES. (YouTube link)

The data was pre-converted so that the video frames were stored as tilesets in the ROM image, and [Guy] used the NES memory mapper (MMC3) to swap the frames. [darNES] had originally planned to put a Raspberry Pi in the cartridge to handle the video conversion and networking, but had to change gears and make a static ROM image due to time constraints and resource availability.
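For a sense of what “stored as tilesets” means, the NES’s CHR format packs each 8×8 tile into 16 bytes across two bit planes. Here’s a rough sketch of that packing step; the surrounding frame-to-tile slicing, palette reduction, and MMC3 bank layout are not shown, and none of this is [darNES]’s actual code.

```cpp
// Sketch: pack one 8x8 tile of 2-bit pixels into NES CHR format
// (16 bytes per tile: 8 bytes of low bits, then 8 bytes of high bits).
#include <array>
#include <cstdint>

// `pixels[y][x]` holds palette indices 0..3 for one tile.
std::array<uint8_t, 16> pack_tile(const uint8_t pixels[8][8])
{
    std::array<uint8_t, 16> chr{};
    for (int y = 0; y < 8; ++y) {
        for (int x = 0; x < 8; ++x) {
            uint8_t p = pixels[y][x] & 0x03;
            if (p & 1) chr[y]     |= 0x80 >> x;   // bit plane 0 (low bit)
            if (p & 2) chr[y + 8] |= 0x80 >> x;   // bit plane 1 (high bit)
        }
    }
    return chr;
}
```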

Accessing the Netflix data is just like the days of yore – load the cartridge into an unmodified NES and hit the power button (they didn’t even need to blow on it!). A bare-bones Netflix gallery appears. You can move the white cursor on the screen with the NES controller’s D-pad. House of Cards was the choice, and true to form, the next screen shows you a synopsis with a still image and gives you the option to Play. Recommend is also there, but obviously won’t work in this setup. Still, it got a chuckle out of us. [darNES] admits that due to time issues they did not optimize the color palette for the tilesets. They plan to release more of the technical info this week, but have already given us some hints on their Hacker News thread.

Check out the videos after the break to see the footage they fit onto a 256K NES cartridge.

Continue reading “[darNES] Stores Cached Netflix on NES Cartridge”

Retrotechtacular: The Early Days of CGI

We all know what Computer-Generated Imagery (CGI) is nowadays. It’s almost impossible to get away from it in any television show or movie. It’s gotten so good that sometimes it can be difficult to tell the difference between the real world and the computer-generated world when they are mixed together on-screen. Of course, it wasn’t always like this. This 1982 clip from the BBC’s Tomorrow’s World shows what the wonders of CGI were capable of in a simpler time.

In the earliest days of CGI, digital computers weren’t even really a thing. [John Whitney] was an American animator and is widely considered to be the father of computer animation. In the 1940s, he and his brother [James] started to experiment with what they called “abstract animation”. They pieced together old analog computers and servos to make their own devices capable of controlling the motion of lights and lit objects. While this process may be a far cry from the CGI of today, it is still animation performed by a computer. One of [Whitney’s] best-known works is the opening title sequence of [Alfred Hitchcock’s] 1958 film, Vertigo.

Later, in 1973, Westworld became the very first feature film to use CGI. The film was a science fiction western-thriller about amusement park robots that become evil. The studio wanted footage of the robot’s “computer vision”, but they would need an expert to get the job done right. They ultimately hired [John Whitney’s] son, [John Whitney Jr], to lead the project. The process first required color-separating each frame of the 70mm film, because [John Jr] did not have a color scanner. He then used a computer to digitally modify each image to create what we would now recognize as a “pixelated” effect. The computer processing took approximately eight hours for every ten seconds of footage.

Continue reading “Retrotechtacular: The Early Days of CGI”

Beating Super Hexagon with OpenCV and DLL Injection

Every few months a game comes along that is so addictive players can’t seem to put it down, no matter how frustrating it gets. Last year one of those games was Super Hexagon. After fighting his way through several levels, [Val] decided that designing a bot to beat the game would be more efficient than doing it himself. Having played a few rounds of Super Hexagon ourselves, we can’t fault him on that front!

At its core, Super Hexagon is a simple game. Walls move from the screen edges toward a ship located near the center of the screen. The player uses the arrow keys to “orbit” the ship around a central shape. Avoid getting crushed by the walls, and you’re golden. However, the entire game board is constantly spinning, expanding, contracting, flashing, and generally doing things to disorient the player while ever more complex wall patterns move in to kill you. In short, Super Hexagon makes Touhou bullet hell games look like a cakewalk.

The first step in beating the game is capturing the screen. [Val] tried Fraps and VLC, but lag of two seconds or more was not going to work. Then [Val] turned to DLL injection. Super Hexagon calls the GLUT function glutSwapBuffers() to implement double buffering: every frame of the game is rendered in the background, and once rendering is complete, glutSwapBuffers() is called to swap the buffers and the process starts over again. [Val] changed the game code such that his own frame-capture function would be called instead of glutSwapBuffers(). Once he was done capturing the game’s video buffer, [Val] then called the real glutSwapBuffers() function. It worked perfectly.
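In rough terms, the hook boils down to a replacement function that grabs the back buffer and then forwards the call. Here’s a minimal sketch of that idea; the injection and redirection machinery (getting the DLL into the process and pointing the call at hooked_glutSwapBuffers()), the window size, and all of the names are assumptions, not [Val]’s code.

```cpp
// Sketch of a swap-buffer hook inside an injected DLL. Assumes the call to
// glutSwapBuffers() has already been redirected here (e.g. by patching the
// import table) and the original function pointer was saved at install time.
#include <GL/gl.h>
#include <cstdint>
#include <vector>

static void (*real_glutSwapBuffers)() = nullptr;   // saved original

static const int frame_w = 768, frame_h = 480;     // assumed window size
static std::vector<uint8_t> frame(frame_w * frame_h * 3);

// Called by the game in place of glutSwapBuffers().
extern "C" void hooked_glutSwapBuffers()
{
    // Grab the back buffer the game just finished rendering.
    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, frame_w, frame_h, GL_RGB, GL_UNSIGNED_BYTE, frame.data());

    // ... hand `frame` to the vision/AI code here ...

    // Let the game carry on as if nothing happened.
    if (real_glutSwapBuffers) real_glutSwapBuffers();
}
```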

Now that he had an image, [Val] used OpenCV to process it. Although the game is graphically very noisy, only a few colors are used at any one time. It didn’t take much work to come up with an algorithm that creates a binary image of the walls and the ship itself.
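Something along these lines is all the segmentation step needs. The grayscale-plus-Otsu approach below is a stand-in we chose for brevity, not necessarily the color logic [Val] used.

```cpp
// Sketch: reduce a captured frame to a binary image of walls and ship.
#include <opencv2/opencv.hpp>

cv::Mat extract_walls(const cv::Mat& frame_bgr)
{
    cv::Mat gray, walls;
    cv::cvtColor(frame_bgr, gray, cv::COLOR_BGR2GRAY);
    // Walls and ship are drawn brighter than the pulsing background, so a
    // global threshold (Otsu picks the split point) separates them.
    cv::threshold(gray, walls, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    return walls;
}
```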

[Val] cast rays from the center of each wall through the center of the screen. The ray that ran the longest before intersecting another wall marked the best escape route. This simple solution worked, but only for about 40 seconds. At that point, Super Hexagon would start throwing more complex patterns, and the AI would fail. The final solution was to create an accessibility condition that also took into account how much space was available between the various approaching walls. This new version of the AI was able to beat the game.
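A simplified variant of the idea, sweeping rays outward from the screen center over the binary wall image and keeping the direction that stays clear the longest, looks something like this. The ray count, one-pixel step, and function names are our assumptions; [Val]’s actual ray geometry and the later accessibility condition are not reproduced here.

```cpp
// Sketch: find the clearest direction out of the center of a binary wall image.
#include <cmath>
#include <opencv2/opencv.hpp>

// Returns the angle (radians) of the ray that runs farthest from the screen
// center before it hits a wall pixel in the binary image `walls`.
double best_escape_angle(const cv::Mat& walls)
{
    const cv::Point2f center(walls.cols / 2.0f, walls.rows / 2.0f);
    double best_angle = 0.0, best_dist = -1.0;

    for (int i = 0; i < 360; ++i) {
        double angle = i * CV_PI / 180.0;
        double dist = 0.0;
        // Step outward until we leave the frame or hit a wall pixel.
        for (;; dist += 1.0) {
            int x = static_cast<int>(center.x + dist * std::cos(angle));
            int y = static_cast<int>(center.y + dist * std::sin(angle));
            if (x < 0 || y < 0 || x >= walls.cols || y >= walls.rows) break;
            if (walls.at<uchar>(y, x) != 0) break;   // wall reached
        }
        if (dist > best_dist) { best_dist = dist; best_angle = angle; }
    }
    return best_angle;
}
```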

So was this a more efficient method than grinding through Super Hexagon manually? Since [Val] now knows all about DLL injection and OpenCV, we sure think it was!

Click past the break to see [Val]’s bot in action!

Continue reading “Beating Super Hexagon with OpenCV and DLL Injection”

ATtiny85 Does Over The Air NTSC

[CNLohr] has made a habit of using ATtiny microcontrollers for everything, and one of his most popular projects is using an ATtiny85 to generate NTSC video. With a $2 microcontroller and eight pins, [CNLohr] can put text and simple graphics on any TV. He’s back at it again, only this time the microcontroller isn’t plugged into the TV.

The ATtiny in this project is overclocked to 30MHz or so using the on-chip PLL. That, plus a few wires of sufficient length, means this chip can generate and broadcast NTSC video.
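For a feel of the timing the chip has to hit, a single NTSC scanline is about 63.5 µs: a roughly 4.7 µs sync pulse, a back porch, then the active video. The sketch below only illustrates that line structure; the two-pin level scheme, pin choices, and busy-wait delays are assumptions for illustration, and none of it is [CNLohr]’s cycle-counted code or his RF trickery.

```cpp
// Rough sketch of one NTSC scanline on an AVR, timing illustration only.
#define F_CPU 30000000UL              // assumed overclocked core frequency
#include <avr/io.h>
#include <util/delay.h>

static inline void sync_level()  { PORTB &= ~(_BV(PB0) | _BV(PB1)); } // ~0 V
static inline void black_level() { PORTB |=  _BV(PB0); }              // blanking
static inline void white_level() { PORTB |=  _BV(PB1); }              // bright

// One ~63.5 us line: horizontal sync, back porch, then active video
// (drawn solid white here instead of real picture content).
static void ntsc_line(void)
{
    sync_level();   _delay_us(4.7);   // horizontal sync pulse
    black_level();  _delay_us(6.2);   // back porch
    white_level();  _delay_us(52.6);  // active video
    PORTB &= ~_BV(PB1);               // drop back to black before next line
}

int main(void)
{
    DDRB = _BV(PB0) | _BV(PB1);       // both video pins as outputs
    for (;;) ntsc_line();             // real video also needs vertical sync
}
```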

[CNLohr] mentions that it should be possible to use this board to transmit closed captioning directly to a TV. If you’re looking for the simplest way to display text on a monitor with an AVR, there ya go: a microcontroller and two wires. He’s unable to actually test this, though, as he lost the remote for his tiny turn-of-the-millennium TV. Because there’s no way for [CNLohr] to enable closed captioning on his set, he can’t build the obvious application for this circuit: a closed-caption Twitter bot. That doesn’t mean you can’t.

Video below.

Continue reading “ATtiny85 Does Over The Air NTSC”