There are adults out there driving who were born after Google Earth came out. Potentially distressing facts aside, for those of us who were around, the magic of scrolling in and out with seemingly infinite levels of detail was an experience that burned itself into our brains. Perhaps still curious 21 years later, [Craig Kochis] dove into how vector maps work by implementing one himself.
Some standard helper functions convert latitude and longitude into Mercator coordinates (x and y positions on the screen). He used WebGL to draw to a canvas, making the whole thing interactive on the webpage itself. The camera position and zoom level are stored as a matrix that transforms the map projection. By grabbing polygons that describe the shape of various borders and then converting those polygons into triangles with earcut, [Craig] has part of a country rendered to the screen.
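To make those two steps concrete, here's a minimal sketch of our own (not [Craig]'s actual code) that projects latitude and longitude to Web Mercator and then triangulates a toy border polygon with the earcut library:

```typescript
// Our illustration of the projection + triangulation steps described above.
// Assumes the earcut npm package (https://github.com/mapbox/earcut).
import earcut from "earcut";

// Project degrees to Web Mercator x/y in the range [0, 1].
function mercator(lonDeg: number, latDeg: number): [number, number] {
  const x = (lonDeg + 180) / 360;
  const latRad = (latDeg * Math.PI) / 180;
  const y = 0.5 - Math.log(Math.tan(Math.PI / 4 + latRad / 2)) / (2 * Math.PI);
  return [x, y];
}

// A made-up "border" polygon as [longitude, latitude] pairs.
const border: [number, number][] = [
  [-5, 50], [2, 52], [8, 49], [3, 44], [-2, 45],
];

// Flatten projected points into the [x0, y0, x1, y1, ...] layout earcut expects.
const flat = border.flatMap(([lon, lat]) => mercator(lon, lat));

// earcut returns vertex indices, three per triangle, ready for a
// WebGL ELEMENT_ARRAY_BUFFER.
const triangleIndices = earcut(flat);
```

The camera matrix the article mentions would then pan and zoom those triangles in the vertex shader, without re-projecting anything.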
As the browser becomes more like an operating system, we are seeing deeper features built into it. For example, you can now write a form of assembly language, WebAssembly, for the browser. Sophisticated graphics have been around using WebGL since about 2011, but some people find it hard to use. [Surma] was one of those people, and tried a newly surfaced way to do the same thing: WebGPU.
[Surma] liked it better and shares a lot of information in the post, which, oddly, doesn't use WebGPU for graphics very much. Instead, it focuses on using GPU cores for fast computation, something else you can do with WebGPU. If your goal is to draw on the screen, though, you need to know the basics, and the post links to a site with examples of doing this.
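For a taste of the compute side, here's a minimal sketch of our own (not from [Surma]'s post) that doubles an array of floats on the GPU. It assumes a browser with WebGPU enabled (and, for TypeScript, the @webgpu/types package) plus a context where top-level await works:

```typescript
// Our minimal WebGPU compute example: double every element of an array.
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) throw new Error("WebGPU not available");
const device = await adapter.requestDevice();

// WGSL compute shader: each invocation doubles one element.
const shader = device.createShaderModule({
  code: `
    @group(0) @binding(0) var<storage, read_write> data: array<f32>;
    @compute @workgroup_size(64)
    fn main(@builtin(global_invocation_id) id: vec3<u32>) {
      if (id.x < arrayLength(&data)) { data[id.x] = data[id.x] * 2.0; }
    }`,
});

const pipeline = device.createComputePipeline({
  layout: "auto",
  compute: { module: shader, entryPoint: "main" },
});

// Upload the input, leaving room to copy the result back out.
const input = new Float32Array([1, 2, 3, 4]);
const storage = device.createBuffer({
  size: input.byteLength,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
  mappedAtCreation: true,
});
new Float32Array(storage.getMappedRange()).set(input);
storage.unmap();
const readback = device.createBuffer({
  size: input.byteLength,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
});

const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [{ binding: 0, resource: { buffer: storage } }],
});

// Record and submit: one workgroup of 64 invocations covers our 4 elements.
const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatchWorkgroups(Math.ceil(input.length / 64));
pass.end();
encoder.copyBufferToBuffer(storage, 0, readback, 0, input.byteLength);
device.queue.submit([encoder.finish()]);

await readback.mapAsync(GPUMapMode.READ);
console.log(new Float32Array(readback.getMappedRange())); // [2, 4, 6, 8]
```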
Sometimes the best you can say about a project is, “Nice start.” That’s the case for this as-yet awful DIY 3D scanner, which can serve both as a launching point for further development and a lesson in what not to do.
Don't get us wrong, we have plenty of respect for [bitluni] and for the fact that he posts his failures as well as his successes, like getting composite video and AM radio signals out of an ESP32. This project uses an ESP8266 and two different sensors: an ultrasonic transducer and a small time-of-flight laser chip. Each was mounted to a two-axis scanner built from hobby servos and 3D-printed parts. The pitch and yaw axes sweep the sensors through a hemisphere, gathering data, but unfortunately the Wemos D1 Mini lacks the RAM to render the complete point cloud from the raw points, so that job is farmed out to a WebGL page. Initial results with the ultrasonic sensor were not great, and the TOF sensor left a lot to be desired too. But [bitluni] stuck with it and got a few results that at least make it look like he's heading in the right direction.
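The geometry behind the scan is simple enough: each pitch/yaw servo position plus a distance reading is a point in spherical coordinates. Here's a small sketch (our illustration, not [bitluni]'s code) of the conversion a WebGL page would do to plot the cloud; the sensor-reading stub is hypothetical:

```typescript
// Convert a scanner sample (servo angles + measured distance) into a 3D point.
interface Point3D { x: number; y: number; z: number; }

function sampleToPoint(pitchDeg: number, yawDeg: number, distance: number): Point3D {
  const pitch = (pitchDeg * Math.PI) / 180;
  const yaw = (yawDeg * Math.PI) / 180;
  return {
    x: distance * Math.cos(pitch) * Math.cos(yaw),
    y: distance * Math.cos(pitch) * Math.sin(yaw),
    z: distance * Math.sin(pitch),
  };
}

// Hypothetical stand-in for a reading from the ultrasonic or TOF sensor.
function readSensor(pitchDeg: number, yawDeg: number): number {
  return 100; // pretend everything is 100 mm away
}

// Sweep the hemisphere: one reading per (pitch, yaw) servo position.
const cloud: Point3D[] = [];
for (let pitch = 0; pitch <= 90; pitch += 2) {
  for (let yaw = 0; yaw < 360; yaw += 2) {
    cloud.push(sampleToPoint(pitch, yaw, readSensor(pitch, yaw)));
  }
}
```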
We expect he’ll get this sorted out and come back with some better results, but in the meantime, we applaud his willingness to post this so that we can all benefit from his pain. He might want to check out the results from this polished and pricey LIDAR scanner for inspiration.
We keep seeing more and more TensorFlow neural network projects. We also keep seeing more and more things running in the browser. You don't have to be Mr. Spock to see this one coming. TensorFire runs neural networks in the browser and claims that WebGL allows it to run as quickly as it would on the user's desktop computer. The main page is a demo that stylizes images, but if you want more detail you'll probably want to visit the project page instead. You might also enjoy the video from one of the creators, [Kevin Kwok], below.
TensorFire has two parts: a low-level language for writing massively parallel WebGL shaders that operate on 4D tensors, and a high-level library for importing models from Keras or TensorFlow. The authors claim it will work on any GPU and, in some cases, will actually be faster than running native TensorFlow.
The documentation is a bit sparse but readable. You simply define the function you want to execute and the dimensions of the problem. You can specify one, two, or three dimensions, as suits your problem space. When you execute the associated function it will try to run the kernels on your GPU in parallel. If it can’t, it will still get the right answer, just slowly.
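We haven't quoted the TensorFire docs here, so the following is only a hypothetical sketch of the workflow that paragraph describes: define a kernel function and the problem's dimensions, run it on the GPU if possible, and fall back to a plain loop otherwise. None of these names are TensorFire's actual API:

```typescript
// Hypothetical kernel-runner interface illustrating the described workflow.
type Kernel = (x: number, y: number) => number;

interface KernelRunner {
  // Runs the kernel once per output cell; a real GPU backend would compile
  // the kernel into a shader, while this fallback just loops on the CPU.
  run(kernel: Kernel, dims: [number, number]): Float32Array;
}

// The "still get the right answer, just slowly" path: a CPU fallback.
const cpuRunner: KernelRunner = {
  run(kernel, [width, height]) {
    const out = new Float32Array(width * height);
    for (let y = 0; y < height; y++)
      for (let x = 0; x < width; x++)
        out[y * width + x] = kernel(x, y);
    return out;
  },
};

// Example: evaluate a function over a 256x256 grid.
const result = cpuRunner.run(
  (x, y) => Math.sin(x * 0.1) * Math.cos(y * 0.1),
  [256, 256],
);
```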
Over at [Truthlabs], a 30-year-old pinball machine was diagnosed with a major flaw in its game design: it could only entertain one person at a time. [Dan] and his colleagues set out to change this, transforming the ol' pinball legend "Firepower" into a spectacular, immersive gaming experience worthy of the 21st century.
A major limitation they wanted to overcome was screen size. A projector mounted to the ceiling should turn the entire wall behind the machine into a massive 15-foot playfield for anyone in the room to enjoy.
With so much space to fill, the team assembled a visual concept tailored to blend seamlessly with the original storyline of the arcade classic, studying the machine’s artwork and digging deep into the sci-fi archives. They then translated their ideas into 3D graphics utilizing Cinema4D and WebGL along with the usual designer’s toolbox. Lasers and explosions were added, ready to be triggered by game interactions on the machine.
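The team's shader code isn't shown here, so below is only a generic sketch of how a WebGL fragment-shader effect gets wired up: compile, link, and draw a fullscreen triangle whose brightness a game event (a hit, say) could drive. The flash uniform is our invention:

```typescript
// Generic WebGL boilerplate for a fullscreen fragment-shader effect.
function compile(gl: WebGLRenderingContext, type: number, src: string): WebGLShader {
  const shader = gl.createShader(type)!;
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS))
    throw new Error(gl.getShaderInfoLog(shader) ?? "shader compile failed");
  return shader;
}

const canvas = document.querySelector("canvas")!;
const gl = canvas.getContext("webgl")!;

const vs = compile(gl, gl.VERTEX_SHADER, `
  attribute vec2 pos;
  void main() { gl_Position = vec4(pos, 0.0, 1.0); }`);
const fs = compile(gl, gl.FRAGMENT_SHADER, `
  precision mediump float;
  uniform float flash; // 0..1, driven by pinball events
  void main() { gl_FragColor = vec4(vec3(1.0, 0.3, 0.0) * flash, 1.0); }`);

const prog = gl.createProgram()!;
gl.attachShader(prog, vs);
gl.attachShader(prog, fs);
gl.linkProgram(prog);
gl.useProgram(prog);

// One oversized triangle covers the whole screen.
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([-1, -1, 3, -1, -1, 3]), gl.STATIC_DRAW);
const loc = gl.getAttribLocation(prog, "pos");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

// Flash at full strength; a real effect would animate this from game events.
gl.uniform1f(gl.getUniformLocation(prog, "flash"), 1.0);
gl.drawArrays(gl.TRIANGLES, 0, 3);
```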
To hook the augmentation into the pinball machine's own game progress, they devised an elegant solution, incorporating OpenCV and OCR, to read all five of the machine's 7-segment displays from a single webcam. An Arduino inside the machine taps into the numerous mechanical switches and indicator lamps, keeping a Node.js server updated about pressed buttons, hits, the "Lane Change" and plunged balls.
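The write-up doesn't include the bridge code, but the Node.js side of such a setup can be small. Here's a hedged sketch assuming the serialport and ws npm packages; the one-event-per-line format from the Arduino is our invention:

```typescript
// Our sketch of a serial-to-WebSocket bridge: the Arduino prints one event
// per line over USB serial, and we rebroadcast it to the projection client.
import { SerialPort } from "serialport";
import { ReadlineParser } from "@serialport/parser-readline";
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });
const port = new SerialPort({ path: "/dev/ttyUSB0", baudRate: 115200 });
const parser = port.pipe(new ReadlineParser({ delimiter: "\n" }));

// Hypothetical line format: "SWITCH:left_flipper" or "LAMP:bonus_2x".
parser.on("data", (line: string) => {
  const [kind, name] = line.trim().split(":");
  const event = JSON.stringify({ kind, name, at: Date.now() });
  for (const client of wss.clients) client.send(event);
});
```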
The result is the impressive demonstration of both passion and skill you can see in the video below. We really like the custom shader effects. How could we ever play pinball without them?