Rescued IMac G4 Restored And Upgraded With Mac Mini M1 Guts

Three abandoned iMac G4s, looking for a loving home… (Credit: Hugh Jeffreys)

The Apple iMac G4 was lovingly referred to as the ‘Apple iLamp’ due to its rather unique design, with the display perched on a jointed arm. Released in 2002 and produced until 2004, it was the first iMac to feature an LCD. With only a single-core PowerPC G4 CPU clocked at around 1 GHz, these machines are considered e-waste by the average person these days.

That’s how [Hugh Jeffreys] recently found a triplet of these iMacs abandoned at an industrial site. Despite their rough state, he decided to adopt them on the spot and gave one of them a complete make-over: a good scrub-down, a replacement LCD, and Mac Mini M1 guts to stand in for the broken G4 logic board.

The chosen iMac had a busted-up screen and a heavily corroded logic board that looked like someone had already tried to ‘fix’ it. A new (used) 17″ LCD from a MacBook Pro was installed, which required a Realtek RTD2660-based display controller board to convert HDMI to the panel’s LVDS interface. The new logic board and power supply were sourced from an M1-based Mac Mini, with a 3D-printed adapter plate to position everything inside the iMac’s base. Wiring it all up took some creative solutions, with routing the wires through the articulated monitor arm being the biggest struggle. The Mac Mini’s WiFi antenna turned out to be riveted in place and broke off, but the iMac’s original WiFi antenna could be used instead.

Although some clean-up is still needed, including better internal connector extensions, the result is a fully functional 2024 iMac M1 that totally wouldn’t look out of place in an office today. Plus it’s significantly easier to adjust the monitor’s angle and height compared to Apple’s official iMac offerings, making it the obviously superior system.


TSMC’s Long Path From Round To Square Silicon Wafers

Crystal of Czochralski-grown silicon.

Most of us will probably have seen semiconductor wafers as they trundle their way through a chip factory, and some of us may have wondered why they are round. This roundness is an obvious problem when you consider that the chip dies themselves are rectangular, meaning that a significant number of the dies etched into a wafer end up incomplete and thus wasted, especially with (expensive) large dies. This notion has not escaped the attention of chip manufacturers like TSMC, which is apparently studying a way to make square substrates a reality.

According to information provided to Nikkei Asia by people with direct knowledge of the matter, 510 mm x 515 mm substrates are currently being trialed, which would replace the current standard 12″ (300 mm) round wafers. For massive dies such as NVIDIA’s H200 (814 mm²), this means that approximately three times as many would fit per wafer. When this technology might go into production is unknown, but there is significant incentive in the current market to make it work.
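
To get a feel for the numbers, here is a back-of-the-envelope sketch in Python (assumptions: a square die with side √area, the textbook dies-per-wafer approximation, and no scribe lanes, edge exclusion or packaging overhead) comparing a 300 mm round wafer with a 510 mm x 515 mm panel:

    import math

    def dies_per_round_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
        """Textbook dies-per-wafer approximation: gross area term minus an edge-loss term."""
        d, s = wafer_diameter_mm, die_area_mm2
        return int(math.pi * d**2 / (4 * s) - math.pi * d / math.sqrt(2 * s))

    def dies_per_square_panel(die_area_mm2: float, width_mm: float = 510.0, height_mm: float = 515.0) -> int:
        """Grid-pack square dies (side = sqrt(area)) onto a rectangular panel."""
        side = math.sqrt(die_area_mm2)
        return int(width_mm // side) * int(height_mm // side)

    die_area = 814.0  # mm², roughly an H200-class die
    print(dies_per_round_wafer(die_area))   # ~63 candidate dies on a 300 mm wafer
    print(dies_per_square_panel(die_area))  # ~306 on a 510 mm x 515 mm panel

The raw geometry overshoots the roughly threefold gain quoted above because it ignores scribe lanes, reticle limits and the packaging steps these substrates are actually aimed at, but it does show how much of a round wafer’s edge goes to waste once dies get this big.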

As for why wafers are round: it comes down to how silicon wafers are produced, using the Czochralski method, named after the Polish scientist [Jan Czochralski] who invented it in 1915. This method results in cylindrical, rod-shaped crystals that are then sliced up into the round wafers we all know and love. Going square is thus not inherently impossible, but it will require updating every step of the process and manufacturing line to work with the different shape.

Nearly 30 Years Of FreeDOS And Looking Ahead To The Future

Blinky, the friendly FreeDOS mascot.

The first version of FreeDOS was released on September 16 of 1994, following Microsoft’s decision to cease development of MS-DOS in favor of Windows. This version 0.01 was still an alpha release, with 0.1 from 1998 being the first beta, and the first stable release (1.0, released on September 3, 2006) still a while off. Even so, its main developer [Jim Hall] and the like-minded developers on the FreeDOS team managed to put together a very functional DOS from a shell, kernel and other elements that already partially existed before the FreeDOS (initially PD-DOS, for Public Domain DOS) idea was pitched by [Jim].

Nearly thirty years later, [Jim] reflects on these decades and on the strong uptake of what to many today would seem to be just a version of an antiquated OS. When it comes to embedded and industrial applications, of course, a simple DOS is all you want and need, not to mention for a utility you boot from a USB stick. Within the retrocomputing community FreeDOS has proven to be a boon as well, allowing old PCs to run a modern DOS rather than being stuck on a version of MS-DOS from the early 90s.

For FreeDOS’ future, [Jim] is excited to see what other applications people may find for this OS, including as a teaching tool on account of how uncomplicated FreeDOS is. In a world of complicated OSes that no single mortal can comprehend any more, FreeDOS is really quite a breath of fresh air.

Uncovering ChatGPT Usage In Academic Papers Through Excess Vocabulary

Frequencies of PubMed abstracts containing certain words. Black lines show counterfactual extrapolations from 2021–22 to 2023–24. The first six words are affected by ChatGPT; the last three relate to major events that influenced scientific writing and are shown for comparison. (Credit: Kobak et al., 2024)

That students these days love to use ChatGPT for assistance with reports and other writing tasks is hardly a secret, but its use is becoming ever more prevalent in academia as well. This raises the question of whether ChatGPT-assisted academic writing can somehow be distinguished. According to [Dmitry Kobak] and colleagues, this is the case, with a strong sign of ChatGPT use being the presence of a lot of flowery excess vocabulary in the text. As detailed in their prepublication paper, the frequency of certain style words has changed remarkably in the vocabulary of the published works they examined.

For their study they looked at over 14 million biomedical abstracts from 2010 to 2024, obtained via PubMed. These abstracts were then analyzed for word usage and frequency, revealing both natural increases in word frequency (e.g. from the SARS-CoV-2 pandemic and the Ebola outbreak) and massive spikes in excess vocabulary that coincide with the public availability of ChatGPT and similar LLM-based tools.

In total, 774 unique excess words were annotated. Here ‘excess’ means ‘outside of the norm’, following the pattern of ‘excess mortality’, where mortality during one period noticeably deviates from the patterns established during previous periods. In this regard the bump in words like respiratory is logical, but the surge in style words like intricate and notably would seem to be due to LLMs having a penchant for such flowery, overly dramatic language.
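
The core of the excess-vocabulary analysis is simple arithmetic. The authors have published their own analysis code, but a minimal Python sketch of the idea (with made-up frequencies, not their data) might look like this:

    def excess_frequency(freq_by_year: dict[int, float], target_year: int) -> tuple[float, float]:
        """Linearly extrapolate 2021-22 usage to the target year (the counterfactual),
        then return the excess gap (observed - expected) and the ratio observed/expected."""
        slope = freq_by_year[2022] - freq_by_year[2021]
        expected = freq_by_year[2022] + slope * (target_year - 2022)
        observed = freq_by_year[target_year]
        return observed - expected, observed / expected

    # Hypothetical share of abstracts containing a style word such as "delves"
    delves = {2021: 0.0010, 2022: 0.0011, 2023: 0.0015, 2024: 0.0060}
    gap, ratio = excess_frequency(delves, 2024)
    print(f"excess frequency gap: {gap:.4f}, frequency ratio: {ratio:.1f}x")

A word whose observed frequency greatly exceeds its counterfactual extrapolation gets flagged as excess vocabulary, directly analogous to how excess mortality is computed.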

The researchers have made the analysis code available for those interested in trying it out on another corpus. The main author also addressed the question of whether ChatGPT might be influencing people to write more like an LLM themselves. At this point it’s still an open question whether people will be more inclined to adopt ChatGPT-like vocabulary or will actively seek to avoid sounding like an LLM.

First Hubble Image Taken In New Single Gyro Pointing Mode

After Space Shuttle Atlantis’ drive-by repair of the Hubble Space Telescope (HST) in May of 2009, the end of the Shuttle program meant that the space telescope had to fend for itself, with no prospect of any further repair missions. The weakest point turned out to be the gyroscopes, with only three of the original six still functioning until May 24th of 2024, when one of them failed and couldn’t be reset any more. To make the most of the HST’s remaining lifespan, NASA decided to transition to single-gyroscope operation, with the most recent imaging results showing that this enables HST to return to its science mission.

The HST has operated with a reduced number of gyroscopes before, while awaiting its (much delayed) 2009 Servicing Mission 4, but this time around it would appear that no such aid is coming. Although HST is still very much functional, even after recently celebrating its 34th year in space, there is a lot of debate about whether another servicing mission could be organized, or whether HST will be deorbited in a number of years. Recently, people like [Jared Isaacman] have floated ideas for another HST servicing mission, with [Jared] even offering to pay for the entire mission out of pocket.

While there is an argument to be made that a Crew Dragon is a poor substitute for a Shuttle, with its big cargo bay, airlock and robotic arm, it’s promising to see that, for now, HST can keep doing what it does best with few compromises, and we may just see a Servicing Mission 5 happen at some point before that last gyro kicks the bucket.

Mapping Litter In The Oceans From Space With Existing Satellites

Litter-windrow detections in the Mediterranean Sea. (Credit: ESA)

Aerial drone image of a litter windrow in Bay of Biscay, Spain. Windrow width: 1-2 meters. (Credit: ESA)

Recently, ESA published the results of a proof-of-concept study into monitoring marine litter using existing satellites, with promising results for the Mediterranean study area. The study used six years of historical data from the multispectral imaging cameras on the Sentinel-2 satellites, comprising 300,000 images with a resolution of 10 meters. The focus was on litter windrows: common accumulations of plastic, wood and other types of marine debris floating on the surface, forming clearly visible lines that can be meters wide and many times as long.

These images were processed as explained in the open-access paper in Nature Communications by [Andrés Cózar] and colleagues. Since marine litter (ML) tends to be overwhelmingly composed of plastic, detection is made easier: any ML that’s visible from space can generally be assumed to be primarily plastic litter. This was combined with the spectral profile of common plastics, so that other types of floating material (algae, driftwood, seafoam, etc.) could be filtered out, leaving just the litter.
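
As a rough illustration of what such per-pixel spectral screening involves (this is not the classifier from the paper; the band names are real Sentinel-2 bands, while the thresholds are invented for the example), a Python sketch could look like this:

    def looks_like_litter(b04_red: float, b08_nir: float, b11_swir: float) -> bool:
        """Flag bright floating material that behaves neither like open water nor like vegetation."""
        ndvi = (b08_nir - b04_red) / (b08_nir + b04_red + 1e-9)
        floating = b08_nir > b11_swir      # water absorbs strongly in SWIR, floating matter stands out in NIR
        vegetation_like = ndvi > 0.3       # algae and seaweed show a strong red edge
        bright_enough = b08_nir > 0.05     # ignore near-black open-water pixels
        return floating and bright_enough and not vegetation_like

    # Hypothetical reflectances for two 10 m pixels
    print(looks_like_litter(b04_red=0.06, b08_nir=0.09, b11_swir=0.03))  # True: plausible debris
    print(looks_like_litter(b04_red=0.04, b08_nir=0.20, b11_swir=0.02))  # False: reads as algae/vegetation

The actual study also has to contend with atmospheric effects, sun glint and the fact that a 10-meter pixel is wider than many windrows, so treat this purely as a sketch of the filtering idea.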

This revealed many of these short-lived litter windrows, with spot confirmation from ships in the area. Some of the windrows were many kilometers in length, with an average of around 1 km.

Although just a proof of concept, it nevertheless shows that monitoring such plastic debris from space is quite doable, even without dedicated satellites. With tons more plastic making its way into the oceans every day, this provides us with the means to at least keep track of the scope of the problem, even if resolving it, along with the associated microplastics problem, is still a far-off dream.

Human Brains Can Tell Deepfake Voices From Real Ones

Although it’s generally accepted that synthesized voices which mimic real people’s voices (so-called ‘deepfakes’) can be pretty convincing, what does our brain really think of these mimicry attempts? To answer this question, researchers at the University of Zurich put a number of volunteers into fMRI scanners and observed how their brains reacted to real and synthesized voices. The perhaps somewhat surprising finding is that the human brain shows differences in two regions depending on whether it’s hearing a real or a fake voice, meaning that on some level we are aware that we are listening to a deepfake.

The detailed findings by [Claudia Roswandowitz] and colleagues are published in Communications Biology. For the study, 25 volunteers were asked to accept or reject the voice samples they heard as being natural or synthesized, as well as to perform identity matching with the supposed speaker. The natural voices came from four male (German) speakers, whose voices were also used to train the synthesis model. Not only did identity-matching performance crater with the synthesized voices, but the resulting fMRI scans also showed very different brain activity depending on whether it was the natural or the synthesized voice.

One of these regions was the auditory cortex, which clearly indicates that there were acoustic differences between the natural and fake voices; the other was the nucleus accumbens (NAcc). This part of the basal forebrain is involved in the cognitive processing of motivation, reward and reinforcement learning, and plays a key role in social, maternal and addictive behavior. Overall, the deepfake voices are characterized by acoustic imperfections and do not elicit the same sense of recognition (and thus reward sensation) as natural voices do.

Until deepfake voices can be made much better, it would appear that we are still safe, for now.