At a time when practical graphical user interfaces were only just becoming a reality on desktop computers, Apple took a leap of faith and released one of the first commercially available mice back in 1983. It was dismissed as little more than a toy at the time, but we all know how that particular story ends.
While the Apple G5431 isn’t that first mouse, it’s not far removed; close enough, in fact, that [Stephen Arsenault] believed it was worthy of historic preservation. Whether you want to print out a new case to replace a damaged original or try your hand at updating the classic design with modern electronics, his CAD model of this early computer peripheral is available under a Creative Commons license for anyone who wants it.
[Stephen] tells us that he was inspired to take on this project after he saw newly manufactured cases for the G5431 popping up online, including a variant made out of translucent plastic. Realizing that a product from 1986 is old enough that Apple (probably) isn’t worried about people cloning it, he set out to produce this definitive digital version of the original case components for community use.
The great thing about word clocks is that while they all follow the same principle of spelling out the time for you, they come in so many shapes, sizes, and other variations that you have plenty of options to build one yourself, whether your craft of choice is woodworking, laser cutting, PCB design, or nothing physical at all. For [Yasa], it was learning 3D modeling combined with a little trip down memory lane that led him to create a fully functional word clock as a rendered animation in Blender.
Inspired by the picture of a commercially available word clock, [Yasa] remembered the fun he had back in 2012 when he made a Turkish version for the Pebble watch, and decided to recreate that picture in Blender. But simply copying an image is of course a bit boring, so he turned it into an actual, functioning clock by essentially emulating a matrix of individually addressable LEDs with a custom texture onto which he maps the current time. And since the original image had the clock positioned by a window, he figured he should have the sun move along with the time as well, to give it an even more realistic feel.
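The time-to-words mapping at the heart of any word clock is simple enough to sketch in a few lines. Here is a hypothetical Python version using English phrasing ([Yasa]'s clock is Turkish); the word lists and function below are our own illustration, not his actual Blender setup:

```python
# Hypothetical word-clock logic: round the time to the nearest 5 minutes
# and decide which words light up. English phrasing for illustration.
HOUR_WORDS = ["TWELVE", "ONE", "TWO", "THREE", "FOUR", "FIVE", "SIX",
              "SEVEN", "EIGHT", "NINE", "TEN", "ELEVEN"]
MIN_WORDS = {5: ["FIVE"], 10: ["TEN"], 15: ["QUARTER"], 20: ["TWENTY"],
             25: ["TWENTY", "FIVE"], 30: ["HALF"]}

def lit_words(hour, minute):
    """Which words light up, with the time rounded to 5 minutes."""
    m = 5 * round(minute / 5)
    if m == 60:                    # rounded up into the next hour
        hour, m = hour + 1, 0
    if m == 0:
        return ["IT", "IS", HOUR_WORDS[hour % 12], "OCLOCK"]
    if m <= 30:
        return ["IT", "IS"] + MIN_WORDS[m] + ["PAST", HOUR_WORDS[hour % 12]]
    return ["IT", "IS"] + MIN_WORDS[60 - m] + ["TO", HOUR_WORDS[(hour + 1) % 12]]

print(lit_words(9, 40))  # ['IT', 'IS', 'TWENTY', 'TO', 'TEN']
```

Mapping each returned word to a block of lit cells in the texture is then all the animation has to do for a given frame's time.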
There’s no question that a desktop 3D printer is at its most useful when it’s producing parts of your own design. After all, if you’ve got a machine that can produce physical objects to your exacting specifications, why not give it some? But even the most diligent CAD maven will occasionally defer to an existing design, as there’s no sense spending the time and effort creating their own model if a perfectly serviceable one is already available under an open source license.
But there’s a problem: finding these open source models is often more difficult than it should be. The fact of the matter is, the ecosystem for sharing 3D printable models is in a very sorry state. Thingiverse, the community’s de facto model repository, is antiquated and plagued with technical issues. Competitors such as Pinshape and YouMagine are certainly improvements on a technical level, but without the sheer number of models and designers that Thingiverse has, they’ve been unable to earn much mindshare. When people are looking to download 3D models, it stands to reason that the site with the most models will be the most popular.
It’s a situation that the community is going to have to address eventually. As it stands, it’s something of a minor miracle that Thingiverse still exists. Owned and operated by Makerbot, the company that once defined the desktop 3D printer but is today all but unknown in a market dominated by low-cost printers from the likes of Monoprice and Creality, it seems only a matter of time before the site finally goes dark. They say it’s unwise to put all of your eggs in one basket, and doubly so if the basket happens to be on fire.
So what will it take to get people to consider alternatives to Thingiverse before it’s too late? Obviously, snazzy modern web design isn’t enough to do it. Not if the underlying service operates on the same formula. To really make a dent in this space, you need a killer feature. Something that measurably improves the user experience of finding the 3D model you need in a sea of hundreds of thousands. You need to solve the search problem.
One of the core lessons any physics student will come to realize is that the more you know about physics, the less intuitive it seems. Take the nature of light, for example. Is it a wave? A particle? Both? Neither? Whatever the answer to the question, scientists are at least able to exploit some of its characteristics, like its ability to bend and bounce off of obstacles. This camera, for example, is able to image a room without a direct line-of-sight as a result.
The process works by pointing a camera through an opening in the room and then strobing a laser at the exposed wall. The laser light bounces off of the wall, into the room, off of the objects on the hidden side of the room, and then back to the camera. This concept isn’t new, but the interesting thing that this group has done is lift the curtain on the image processing underpinnings. Before, the process required a research team and often the backing of a university, but this project shows off the technique using just a few lines of code.
This project’s page documents everything extensively, including all of the algorithms used for reconstructing an image of the room. And by the way, it’s not a simple 2D image, but a 3D model that the camera can capture. So there should be some good information for anyone working in the 3D modeling world as well.
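To get a feel for the kind of reconstruction involved, here is a toy 2D backprojection sketch in the spirit of the technique, not the project's actual code. It simulates time-of-flight histograms ("transients") from a single hidden point, then lets every candidate voxel accumulate the transient energy that arrives at its own round-trip time; the geometry, units, and grid sizes are all invented for illustration:

```python
import numpy as np

# Toy 2D non-line-of-sight backprojection; all values are illustrative.
C = 1.0            # speed of light, arbitrary units
BIN_WIDTH = 0.02   # time resolution of each histogram bin
N_BINS = 300

wall_xs = np.linspace(-1.0, 1.0, 21)   # laser/sensing spots on the wall (y = 0)
hidden_point = np.array([0.3, 0.7])    # the scatterer we pretend is hidden

def time_bin(dist):
    """Histogram bin index for a given round-trip path length."""
    return int(round(dist / (C * BIN_WIDTH)))

# Simulate transients: light goes wall spot -> hidden point -> wall spot.
transients = np.zeros((len(wall_xs), N_BINS))
for i, wx in enumerate(wall_xs):
    b = time_bin(2 * np.hypot(hidden_point[0] - wx, hidden_point[1]))
    if b < N_BINS:
        transients[i, b] += 1.0

# Backprojection: each voxel sums the transient energy found at its own
# round-trip time; voxels on real surfaces accumulate the most votes.
xs = np.linspace(-1.0, 1.0, 81)
ys = np.linspace(0.1, 1.0, 46)
volume = np.zeros((len(ys), len(xs)))
for i, wx in enumerate(wall_xs):
    for yi, y in enumerate(ys):
        for xi, x in enumerate(xs):
            b = time_bin(2 * np.hypot(x - wx, y))
            if b < N_BINS:
                volume[yi, xi] += transients[i, b]

yi, xi = np.unravel_index(np.argmax(volume), volume.shape)
print(f"brightest voxel: ({xs[xi]:.2f}, {ys[yi]:.2f})")  # ~ (0.30, 0.70)
```

The same voting idea extends to a 3D voxel grid, which is how a full model of the hidden scene, rather than a flat image, can fall out of the data.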
If you own a 3D printer, you’ve heard of Thingiverse. The MakerBot-operated site has been the de facto repository for 3D printable models since the dawn of desktop 3D printing, but over the years it’s fallen into a state of disrepair. The site is dated and plagued with performance issues, and many in the community have been wondering how long MakerBot will keep paying to keep the lights on. Alternatives have popped up occasionally, but so far none of them have been able to amass a large enough userbase to offer any real competition.
But that might soon change. [Josef Průša] has announced a revamped community for owners of his 3D printers which includes a brand-new model repository. While clearly geared towards owners of Prusa FDM printers (support for the new SLA printer is coming at a later date), the repository is not exclusive to them. The immense popularity of Prusa’s products, plus the fact that the repository launched with a selection of models created by well known designers, might be enough to finally give Thingiverse a run for its money. Even if it just convinces MakerBot to make some improvements to their own service, it would be a win for the community.
The pessimists out there will say a Prusa-run model database is ultimately not far off from one where MakerBot is pulling the strings; and indeed, a model repository that wasn’t tied to a particular 3D printer manufacturer would be ideal. But given the passion for open development demonstrated by [Josef] and his eponymous company, we’re willing to bet that the site is never going to keep owners of other printers from joining in on the fun.
That being said, knowing that the users of your repository have the same printer (or a variant, at least) as those providing the designs does have its benefits. It allows for some neat tricks like being able to sort designs by their estimated print time, and even offers the ability to upload and download pre-sliced GCode files in place of traditional STLs. In fact, [Josef] boasts that this is the world’s only repository for ready-to-print GCode that you can just drop onto an SD card and print.
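Pre-sliced GCode is what makes a trick like sorting by print time cheap for a repository: slicers typically embed their time estimate in a comment near the top of the file, so indexing it is a matter of parsing text rather than re-slicing. A hypothetical sketch; the exact comment format below is an assumption modeled on PrusaSlicer-style output, not a documented guarantee:

```python
import re

# Assumed comment format, e.g.:
# "; estimated printing time (normal mode) = 1h 33m 20s"
TIME_RE = re.compile(
    r"estimated printing time.*?=\s*(?:(\d+)h\s*)?(?:(\d+)m\s*)?(?:(\d+)s)?")

def estimated_seconds(gcode_text):
    """Estimated print time in seconds from GCode comments, or None."""
    for line in gcode_text.splitlines():
        if line.startswith(";"):
            m = TIME_RE.search(line)
            if m and any(m.groups()):  # first matching estimate wins
                h, mn, s = (int(g) if g else 0 for g in m.groups())
                return 3600 * h + 60 * mn + s
    return None

sample = "; estimated printing time (normal mode) = 1h 33m 20s\nG28 ; home"
print(estimated_seconds(sample))  # 5600
```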
Some time ago, [Trammell Hudson] took a shot at creating a tool that unfolds 3D models in STL format and outputs a color-coded 2D pattern that can be cut out using a laser cutter. With a little bending and gluing, the 3D model can be re-created out of paper or cardboard.
There are of course other, more full-featured tools for unfolding 3D models: Pepakura is used by many, but it is not free and runs only on Windows. There is also a Blender extension called Paper Model that exports 3D shapes as paper models.
What’s interesting about [Trammell]’s project are the things he discovered while making it. The process of unfolding an STL may be conceptually simple, but the actual implementation is a bit tricky in ways that have little to do with number crunching.
For example, in a logical sense it doesn’t matter much where the software chooses to start the unfolding process, but in practice some start points yield much tighter groups of shapes that are easier to work with. Also, his software doesn’t optimize folding patterns, so sometimes it will split a shape along a perfectly logical (but non-intuitive to a human) line, and it can be difficult to figure out which pieces are supposed to attach where. It turns out that it’s quite challenging to turn a 3D model into an unfolded shape that still carries visual cues or resemblances to the original, and adding things like glue tabs in sensible places isn’t trivial, either. The software remains in beta, but those who are interested can find it hosted on GitHub.
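The core flattening step is a nice bit of geometry in its own right: once a triangle's shared edge is placed in the plane, the remaining vertex lands at the intersection of two circles whose radii are the original 3D edge lengths. A minimal hypothetical sketch of that step (not [Trammell]'s code):

```python
import numpy as np

def unfold_vertex(e0, e1, placed, d0, d1):
    """2D position for a vertex at distances d0 from e0 and d1 from e1,
    placed on the opposite side of the edge from `placed` (the 2D
    position of the already-flattened triangle's far vertex)."""
    edge = e1 - e0
    L = np.linalg.norm(edge)
    a = (d0**2 - d1**2 + L**2) / (2 * L)   # coordinate along the edge
    h = np.sqrt(max(d0**2 - a**2, 0.0))    # perpendicular offset from edge
    u = edge / L
    n = np.array([-u[1], u[0]])            # unit normal to the edge
    side = np.sign(np.dot(placed - e0, n)) or 1.0
    return e0 + a * u - side * h * n       # flip away from `placed`

# Triangle BCD folded out of the plane of an already-flattened triangle ABC:
B = np.array([0.0, 0.0, 0.0])
C = np.array([1.0, 0.0, 0.0])
D = np.array([0.5, 0.8, 0.6])              # the 3D vertex to unfold

p_b, p_c = np.array([0.0, 0.0]), np.array([1.0, 0.0])
p_a = np.array([0.5, 1.0])                 # A's 2D position, above the edge
p_d = unfold_vertex(p_b, p_c, p_a,
                    np.linalg.norm(D - B), np.linalg.norm(D - C))
print(p_d)  # lands below the edge, with the 3D edge lengths preserved
```

Repeating this vertex placement across a spanning tree of the mesh's faces is essentially the whole unfolding; the hard parts the project ran into, choosing the tree and avoiding overlaps, sit on top of it.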
If you were to make a list of the most important technological achievements of the last 100 years, advanced medical imaging would probably have to rank right up near the top. The ability to see inside the body in exquisite detail is nearly miraculous, and in some cases life-saving.
Navigating through the virtual bodies generated by the torrents of data streaming out of something like a magnetic resonance imager (MRI) can be a challenge, though. This intuitive MRI slicer aims to change that, making 3D walkthroughs of the human body trivially easy.

[Shachar “Vice” Weis] doesn’t provide a great deal of detail about the system, but from what we can glean, the controller is based on a tablet and a Vive tracker. The Vive is attached to the back of the tablet and tracks its position in space. The plane of the tablet is then interpreted as the slicing plane through the 3D reconstruction of the structure under study.

The video below shows it exploring a human head scan; the update speed is incredible, with no visible lag. [Vice] says this is version 0.1, so we expect more to come. Obvious additions would be the ability to zoom in and out with tablet gestures, and a way to spin the 3D model in space to view it from other angles.
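The slicing itself is conceptually simple: the tracker hands you a plane (a point plus two in-plane axes), and you resample the scan volume along it. A hypothetical sketch of that resampling step, with made-up axis conventions and nearest-neighbor sampling for brevity:

```python
import numpy as np

def slice_volume(volume, origin, u_axis, v_axis, out_shape, spacing=1.0):
    """Nearest-neighbor resample of a 3D scalar volume along an arbitrary
    plane. `origin` is a point on the plane in voxel coordinates; `u_axis`
    and `v_axis` are orthonormal in-plane directions (e.g. a tracked
    tablet's right and up vectors). Out-of-volume samples read as zero."""
    h, w = out_shape
    us = (np.arange(w) - w / 2) * spacing
    vs = (np.arange(h) - h / 2) * spacing
    uu, vv = np.meshgrid(us, vs)
    # 3D sample position for every output pixel
    pts = origin + uu[..., None] * u_axis + vv[..., None] * v_axis
    idx = np.rint(pts).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=-1)
    img = np.zeros(out_shape)
    ii = idx[valid]
    img[valid] = volume[ii[:, 0], ii[:, 1], ii[:, 2]]
    return img

# Toy "scan": a bright plane at index i = 16 inside a 32x32x32 volume.
vol = np.zeros((32, 32, 32))
vol[16, :, :] = 1.0

# Slice exactly along that bright plane: the whole image lights up.
img = slice_volume(vol, origin=np.array([16.0, 16.0, 16.0]),
                   u_axis=np.array([0.0, 1.0, 0.0]),
                   v_axis=np.array([0.0, 0.0, 1.0]),
                   out_shape=(8, 8))
```

A real viewer would use trilinear interpolation and re-run this every frame as the tracker pose changes, but the per-frame work is just this one gather, which is why such a rig can feel lag-free.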