A Peek Inside Apple Durability Testing Labs

Apple is well-known for its secrecy, which is understandable given the high stakes in the high-end mobile phone industry. It’s interesting to get a glimpse inside its durability labs and see the equipment and processes it uses to support its IP68 ingress claims, determine drop resistance, and perform accelerated wear-and-tear testing.

Check out these cool custom-built machines on display! They verify designs against a sliding scale of water ingress tests. At the bottom end is IPx4, which covers a light shower with basically no pressure. Next up is IPx5, which covers low-pressure ambient-temperature spray jets from all angles – we really liked this machine! Finally, the top-end ratings get a literal fire hose blast (the powerful jets covered by IPx6) and, for IPx7 and IPx8, a dip in a static pressure tank simulating a significant depth of water.

An Epson robot arm with a custom gripper is programmed to perform a spinning drop onto a hard surface in a repeatable manner. The drop surface is swapped out for each run – anything from a wooden sheet to a slab of asphalt can be tried. High-speed cameras record the motion in enough detail to resolve the vibrations of the titanium shell upon impact!

Accelerated wear-and-tear testing is carried out using a shake table, which can be adjusted to match the specific vibration frequencies of a car engine or a subway train. Additionally, there’s an interview with the head of Apple’s hardware division discussing the tradeoffs between repairability and durability. He makes a fair point: if modern phones are more reliable and fail less often, then durability can be prioritized in the design, so long as the battery can still be replaced.

The repairability debate has been raging for many years now. Here’s our guide to the responsible use of new technology.

Continue reading “A Peek Inside Apple Durability Testing Labs”

Embrace IPv6 Before It’s Too Late?

Many hackers have familiar sayings in their heads, such as “If it ain’t broke, don’t fix it” and KISS (Keep it simple, stupid). Those of us who have been in the field for some time have habits that are hard to break. When it comes to personal networks, simplicity is key, and the idea of transitioning from IPv4 to IPv6 addresses seems crazy. However, with the increasing number of ‘smart’ devices, streaming media gadgets, and personal phones, finding IPv4 space for our IoT experiments is becoming difficult. Is it time to consider embracing IPv6?

The linked GitHub Gist by [timothyham] summarizes the essential concepts for home network admins to understand before making changes. The first major point is that IPv6 has a vastly larger address space than IPv4, so there is no longer any need to hunt for spare addresses. Another difference is that IPv6 expects an interface to carry multiple addresses at once. The 128-bit address is split into a 64-bit prefix assigned by your ISP and a 64-bit interface identifier, and with SLAAC (Stateless Address Autoconfiguration) clients can work out their own addresses without a DHCP server. You don’t have to use SLAAC, but it will make life easier. The suffix typically remains static, allowing integration with a local DNS server.
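As a rough illustration of how those two 64-bit halves fit together, here’s a minimal sketch of classic EUI-64-based SLAAC address formation. The prefix and MAC address below are documentation placeholders, and note that modern operating systems often prefer randomized interface identifiers instead:

/*
 * A minimal sketch of classic EUI-64-based SLAAC address formation: the
 * 64-bit prefix handed out by the ISP is glued to a 64-bit interface
 * identifier derived from the MAC address. The prefix (2001:db8::/32 is
 * reserved for documentation) and the MAC are placeholder values.
 */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
    /* ISP-assigned /64 prefix, 2001:db8:1234:5678::/64, as raw bytes. */
    uint8_t addr[16] = { 0x20, 0x01, 0x0d, 0xb8, 0x12, 0x34, 0x56, 0x78 };

    /* MAC address of the interface: 52:54:00:ab:cd:ef. */
    const uint8_t mac[6] = { 0x52, 0x54, 0x00, 0xab, 0xcd, 0xef };

    /* EUI-64: flip the universal/local bit and insert FF:FE in the middle. */
    addr[8]  = mac[0] ^ 0x02;
    addr[9]  = mac[1];
    addr[10] = mac[2];
    addr[11] = 0xff;
    addr[12] = 0xfe;
    addr[13] = mac[3];
    addr[14] = mac[4];
    addr[15] = mac[5];

    char buf[INET6_ADDRSTRLEN];
    inet_ntop(AF_INET6, addr, buf, sizeof(buf));
    printf("%s\n", buf);  /* prints 2001:db8:1234:5678:5054:ff:feab:cdef */
    return 0;
}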

Continue reading “Embrace IPv6 Before It’s Too Late?”

The Cheap CNC3018 Gets A Proper Revamp

Many people have been attracted to the low price and big dreams of the CNC3018 desktop CNC router. If you’re quick, you can pick one up on the usual second-hand sales sites with little wear and tear for a steal. They’re not perfect machines by any stretch of the imagination, but they can be improved upon, and they’re undoubtedly useful so long as you keep your expectations realistic.

[ForOurGood] has set about such an improvement process and documented their journey in a whopping eight-part (so far!) video series. The video linked below is the most recent in the series and is dedicated to creating a brushless spindle motor on a budget.

As you would expect from such a machine, you get exactly what you pay for. The low cost translates to thinner-than-ideal metal plates, aluminium where steel would be better, lower-duty linear rails, and wimpy lead screws. The spindle also suffers from cost-cutting, as does the size of the stepper motors. But for the price, all is forgiven. The fact that the manufacturers can even turn a profit on these machines shows the prowess of the Chinese factories.

We covered the CNC3018 a while back, and the comments on that post are a true gold mine for those wanting to try desktop CNC. A warning, though: it’s a fair bit harder to master than 3D printing!

Continue reading “The Cheap CNC3018 Gets A Proper Revamp”

Make Your Code Slower With Multithreading

With the performance of modern CPU cores plateauing in recent years, the main performance gains come from multiple cores and multithreaded applications. Typically, a fast GPU is only so mind-bogglingly quick because thousands of cores operate in parallel on the same set of tasks. So it would seem prudent for our applications to be coded in a multithreaded fashion to take advantage of this parallelism. But as [Marc Brooker] illustrates, it’s not as simple as one would assume, and it’s very easy to end up with far worse overall performance and no easy way to fix it.

[Marc] was rerunning an old experiment to calculate the expected number of shared birthdays in a group of people by brute force. The experiment was essentially a tight loop around a pseudorandom number generator, the standard libc rand() function. [Marc] profiled single-threaded and multithreaded versions of the code and noted that the runtime increased dramatically beyond two threads. Something fishy was going on. Running perf, [Marc] saw significant L1 cache misses, but the real performance killer was the increase in expensive context switches. Perf indicated that with four threads there was an overhead of nearly 50% just servicing spin locks. There were no locks in the code, so after more perf magic, the syscalls taking all the time were identified: something in there was using a futex (or fast userspace mutex) a whole lot.
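To see why, consider the shape of the hot loop. The sketch below is our illustration of the failure mode rather than [Marc]’s actual code: glibc’s rand() guards its shared PRNG state with an internal lock, so threads hammering it end up serialized and bouncing on a futex, while a per-thread generator such as rand_r() avoids the shared state entirely.

/*
 * An illustration of the failure mode, not [Marc]'s actual code: glibc's
 * rand() protects shared PRNG state with an internal lock, so four threads
 * hammering it spend their time bouncing on a futex instead of computing.
 * Swapping in rand_r() gives each thread private state and removes the
 * shared lock entirely.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS    10000000L
#define NTHREADS 4

void *worker_shared(void *arg)
{
    (void)arg;
    long hits = 0;
    for (long i = 0; i < ITERS; i++)
        hits += (rand() % 365) == 0;          /* serialized on libc's lock */
    return (void *)hits;
}

void *worker_private(void *arg)
{
    unsigned int seed = (unsigned int)(uintptr_t)arg;
    long hits = 0;
    for (long i = 0; i < ITERS; i++)
        hits += (rand_r(&seed) % 365) == 0;   /* per-thread state, no lock */
    return (void *)hits;
}

int main(void)
{
    pthread_t threads[NTHREADS];

    /* Time this, then swap worker_shared for worker_private and watch the
     * futex calls vanish under perf or strace. */
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker_shared, (void *)(uintptr_t)(i + 1));
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(&threads[i], NULL);
    return 0;
}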

Continue reading “Make Your Code Slower With Multithreading”

Comparing X86 And 68000 In An FPGA

[Michael Kohn] started programming on the Motorola 68000 architecture and then, for work reasons, moved over to the Intel x86 and was not exactly pleased with the latter chip’s perceived shortcomings. In the ’80s, the 68000 was a very popular chip, powering everything from personal computers to arcade machines, and looking at its architecture and ease of programming, you can see why.

Fast-forward a few years, and [Michael] decided to implement both cores in an FPGA to compare them running real applications, you know, for science. As an extra bonus, he also compares the performance of a minimal RISC-V implementation on the same hardware, taken from an earlier RISC-V project (which you should also check out!).

Utilizing his ‘Java Grinder’ application (also pretty awesome, especially the retro console support), a simple Mandelbrot fractal generator was used as a non-trivial workload to produce binaries for each architecture, and the results were timed. Unsurprisingly for CISC architectures, the 68000 and x86 code sizes were practically identical and significantly smaller than the equivalent RISC-V binary. Still, looking at the execution times, the 68000 beat the x86 hands down, with the newer RISC-V speeding along to take pole position. [Michael] admits that these implementations are minimal, with no pipelining, so they could all be sped up a little.

Also, it’s not a totally fair race. As you’ll note from the RISC-V implementation, a custom RISC-V instruction was added to perform the core of the Mandelbrot iteration. This computes the complex operation Z = Z² + C, which, as fellow fractal nerds will know, is where a Mandelbrot generator spends nearly all its compute time. We suspect that’s the real reason RISC-V came out on top.
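For anyone who hasn’t written one, here’s a generic sketch of that inner loop (not [Michael]’s code) showing why it dominates the runtime: it can run hundreds of times for every single pixel, so a dedicated instruction for this step pays off across the whole image.

/*
 * A generic escape-time inner loop (not [Michael]'s code): Z = Z² + C is
 * iterated up to max_iter times for every pixel, which is where the
 * generator spends nearly all of its time.
 */
#include <stdio.h>

int mandelbrot_iters(double cr, double ci, int max_iter)
{
    double zr = 0.0, zi = 0.0;
    int i;
    for (i = 0; i < max_iter && zr * zr + zi * zi <= 4.0; i++) {
        double tmp = zr * zr - zi * zi + cr;  /* real part of Z² + C      */
        zi = 2.0 * zr * zi + ci;              /* imaginary part of Z² + C */
        zr = tmp;
    }
    return i;  /* the escape count is what gets mapped to a pixel colour */
}

int main(void)
{
    /* c = -1 + 0i lies inside the set, so this hits the iteration cap. */
    printf("%d\n", mandelbrot_iters(-1.0, 0.0, 1000));
    return 0;
}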

If actual hardware is more your cup of tea, you could build a minimal 68k system pretty easily, provided you can find the chips. The current ubiquitous x86 architecture, as odd as it started out, is here to stay for the foreseeable future, so you’d just better get comfortable with it!

Continue reading “Comparing X86 And 68000 In An FPGA”

A3 Audio: The Open Source 3D Audio Control System

Sometimes startups fail because of technical problems; other times it’s a lack of interest from potential investors and a failure to gain development traction. The latter appears to be the case with A3 Audio, so the developers have done the next best thing: made the project open source, and they’re actively looking for more people to pitch in. So what is it? The project is centered around the idea of spatial, or 3D, audio. The system allows ‘audio motion’ to be captured, mixed, and replayed, all the while synchronized to the music. At least, that’s as much as we can figure out from the documentation!

The system is made up of three main pieces of hardware. The first is the core (or server), which is essentially a Linux PC running an OSC (Open Sound Control) server. The second is a ‘motion sampler’, which feeds motion data into the server. Lastly, there is a Mixer, which communicates with the core using the OSC protocol (over Ethernet) to allow pre-mixing of spatial samples and deployment of samples onto the audio outputs. In addition to its core duties, the ‘core’ also manages effects and speaker handling.
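Since everything talks OSC over the network, the modules should be easy to poke at from your own code. Here’s a minimal sketch, using the common liblo library, of what pushing a position update to an OSC server could look like; the IP address, port, and the “/a3/position” path are placeholders of ours, not A3 Audio’s actual message layout.

/*
 * A minimal sketch of pushing a position update to an OSC server using the
 * liblo library. The IP address, port, and "/a3/position" path are
 * placeholders, not A3 Audio's actual message layout.
 */
#include <stdio.h>
#include <lo/lo.h>

int main(void)
{
    /* The core/server's OSC endpoint; liblo takes the port as a string. */
    lo_address server = lo_address_new("192.168.1.10", "9000");

    /* Send x, y, z coordinates for one audio object as three floats. */
    if (lo_send(server, "/a3/position", "fff", 0.5f, -0.25f, 1.0f) < 0)
        fprintf(stderr, "OSC send failed: %s\n", lo_address_errstr(server));

    lo_address_free(server);
    return 0;
}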

The motion module is based around a Raspberry Pi 4 and a Teensy microcontroller, with a 7-inch touchscreen display for user input and oodles of NeoPixels for blinky feedback on the button matrix. The mixer module seems simpler, using just a Teensy for interfacing the UI components.

We don’t see many 3D audio projects, but this neat implementation of a beam-forming microphone phased array sure looks interesting.

Using Kick Assembler And VS Code To Write C64 Assembler

YouTuber [My Developer Thoughts], a self-confessed middle-aged software developer, clearly has a real soft spot for the 6502-based 8-bit era machines such as the Commodore 64 and the VIC-20, for which he has created several video tutorials on his travels through retro-computing. This latest instalment covers bringing up a toolchain for using Kick Assembler with VS Code to target the C64, initially via the VICE emulator.

The video offers a comprehensive tutorial on setting up the toolchain on Windows from scratch, assuming minimal prior knowledge. While some may consider this level of guidance unnecessary, it is extremely helpful for those who want to get started with a few examples quickly and don’t have the time to wade through multiple manuals and wikis. In that regard, the video does an excellent job.

VS Code is a great tool with a large user base, so it’s no surprise there’s a plugin for driving Kick Assembler directly from the IDE. You can also launch your application in the emulator at the push of a button, letting you focus on learning and on the application itself. Once it runs under emulation, there’s a further learning curve to get it running on native hardware, but there are plenty of tutorials for that. And while you could code directly on the C64 itself, it’s much more pleasant to use modern tools, revision control, and all the other conveniences rather than endure the limitations of the original hardware.

Once you’ve mastered assembly, it may be time to move on to C or even C++. The Oscar64 compiler is a good choice for that. Next, you may want to show off your new skills on the retro demo scene. Here’s a neat C64 demo with a twist. There is no C64.

Continue reading “Using Kick Assembler And VS Code To Write C64 Assembler”