You may be familiar with origami, the Japanese art of paper folding, but chances are you haven’t come across smocking. The technique gathers fabric into bunches with stitches, often laid out in a grid to create more structured designs. Smocking is usually done on soft fabrics, and you may have noticed it on silk blouses and cotton shirts. Plenty of 18th and 19th century paintings depict smocking in fashion.
When it comes to peer-to-peer file sharing protocols, BitTorrent is probably one of the best known. It requires a client implementing the protocol and a tracker that lists the files available for transfer and introduces peers to transfer those files with. Developed in 2001, BitTorrent has since acquired more than a quarter of a billion users, according to some estimates.
While most users choose to run an existing client, [Jesse Li] wanted to build one from scratch in Go, a programming language prized for its built-in concurrency features and its simplicity compared to C.
The first step for a client is finding peers to download files from. Trackers, web servers running over HTTP, serve as centralized locations for introducing peers to one another. Due to the centralization, the servers are at risk of being discovered and shut down if they facilitate illegal content exchange. Thus, making peer discovery a distributed process is a necessity for preventing trackers from following in the footsteps of the now-defunct TorrentSpy, Popcorn Time, and KickassTorrents.
The client starts off by reading a .torrent file, which describes the contents of the desired file and how to connect to a tracker. The information in the file includes the URL of the tracker, the creation time, and the SHA-1 hash of each piece – a fixed-size chunk of the file. One file can be made up of thousands of pieces: the client needs to download the pieces from peers, check the hashes against the torrent file, and assemble the pieces to finally reconstruct the file. For the implementation, [Jesse] chose to keep the structures in the Go program reasonably flat, separating application structs from serialization structs. The concatenated piece hashes are also split into a slice of individual hashes so each one is easy to access.
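As a rough sketch of that struct separation, the layout might look something like this in Go. The field names and bencode tags follow the standard .torrent layout (and assume a bencode library such as github.com/jackpal/bencode-go), so treat it as illustrative rather than [Jesse]’s exact code:

```go
package torrent

import "fmt"

// bencodeInfo and bencodeTorrent mirror the on-disk .torrent layout:
// serialization structs, tagged for a bencode library such as
// github.com/jackpal/bencode-go.
type bencodeInfo struct {
	Pieces      string `bencode:"pieces"`
	PieceLength int    `bencode:"piece length"`
	Length      int    `bencode:"length"`
	Name        string `bencode:"name"`
}

type bencodeTorrent struct {
	Announce string      `bencode:"announce"`
	Info     bencodeInfo `bencode:"info"`
}

// TorrentFile is the flat application struct the rest of the client works with.
type TorrentFile struct {
	Announce    string
	InfoHash    [20]byte   // SHA-1 of the bencoded info dictionary
	PieceHashes [][20]byte // one SHA-1 hash per piece
	PieceLength int
	Length      int
	Name        string
}

// splitPieceHashes cuts the concatenated SHA-1 digests into 20-byte hashes
// so each downloaded piece can be verified individually.
func splitPieceHashes(pieces string) ([][20]byte, error) {
	const hashLen = 20 // SHA-1 digest size
	if len(pieces)%hashLen != 0 {
		return nil, fmt.Errorf("malformed pieces of length %d", len(pieces))
	}
	hashes := make([][20]byte, len(pieces)/hashLen)
	for i := range hashes {
		copy(hashes[i][:], pieces[i*hashLen:(i+1)*hashLen])
	}
	return hashes, nil
}
```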
Next, the client makes a GET request to the `announce` URL from the torrent file, announcing its presence to the tracker and receiving a response with the list of peers. To start downloading pieces, the client opens a TCP connection with a peer, completes the two-way BitTorrent handshake, and exchanges messages to request and receive pieces.
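Building the announce request is mostly query-string assembly. Here’s a hedged Go sketch, reusing the `TorrentFile` struct from above; the query parameters (info_hash, peer_id, compact, and so on) come from the tracker protocol, while `buildTrackerURL` itself is an invented name:

```go
package torrent

import (
	"net/url"
	"strconv"
)

// buildTrackerURL assembles the announce GET request for the tracker.
// peerID is a random 20-byte identity chosen by the client at startup.
func (t *TorrentFile) buildTrackerURL(peerID [20]byte, port uint16) (string, error) {
	base, err := url.Parse(t.Announce)
	if err != nil {
		return "", err
	}
	params := url.Values{
		"info_hash":  []string{string(t.InfoHash[:])},
		"peer_id":    []string{string(peerID[:])},
		"port":       []string{strconv.Itoa(int(port))},
		"uploaded":   []string{"0"},
		"downloaded": []string{"0"},
		"compact":    []string{"1"}, // ask for the compact peer list format
		"left":       []string{strconv.Itoa(t.Length)},
	}
	base.RawQuery = params.Encode()
	return base.String(), nil
}
```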
One interesting data structure exchanged in these messages is the bitfield, a compact byte array in which each bit records whether a peer has a particular piece. A bit is flipped on when the peer acquires the corresponding piece, acting somewhat like a loyalty card collecting stamps.
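In Go this maps naturally onto a byte slice with a couple of bit-twiddling methods. A minimal sketch – the method names are assumptions, though the most-significant-bit-first ordering is from the BitTorrent spec:

```go
package torrent

// Bitfield is a compact byte array: bit i is set when the peer has piece i.
// Bits are ordered most-significant-first within each byte, per the spec.
type Bitfield []byte

// HasPiece reports whether the bit for the given piece index is set.
func (bf Bitfield) HasPiece(index int) bool {
	byteIndex, offset := index/8, index%8
	if byteIndex < 0 || byteIndex >= len(bf) {
		return false
	}
	return bf[byteIndex]>>(7-offset)&1 != 0
}

// SetPiece flips the bit on once a piece arrives - stamping the loyalty card.
func (bf Bitfield) SetPiece(index int) {
	byteIndex, offset := index/8, index%8
	if byteIndex < 0 || byteIndex >= len(bf) {
		return
	}
	bf[byteIndex] |= 1 << (7 - offset)
}
```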
While talking to one peer is straightforward, managing the concurrency of talking to multiple peers at once requires solving a classically hard problem. [Jesse] handles this in Go by using channels as thread-safe queues, setting up two channels: one to hand out work and one to collect downloaded pieces. Requests are also pipelined to increase throughput, since network round-trips are expensive and sending blocks one at a time is inefficient.
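Sketched out, the pattern looks roughly like this; the `pieceWork`/`pieceResult` names and the elided peer logic are illustrative stand-ins, not necessarily the exact implementation:

```go
package torrent

// pieceWork describes a piece for a worker to fetch; pieceResult carries the
// downloaded, hash-checked bytes back to the collector.
type pieceWork struct {
	index  int
	hash   [20]byte
	length int
}

type pieceResult struct {
	index int
	buf   []byte
}

// download fans work out to numWorkers goroutines over a buffered channel
// (the thread-safe work queue) and gathers pieces on a results channel.
func download(pieces []pieceWork, numWorkers int) [][]byte {
	workQueue := make(chan *pieceWork, len(pieces))
	results := make(chan *pieceResult)
	for i := range pieces {
		workQueue <- &pieces[i]
	}
	for w := 0; w < numWorkers; w++ {
		go func() {
			for pw := range workQueue {
				buf := make([]byte, pw.length)
				// ... dial a peer, pipeline block requests into buf,
				// verify against pw.hash, re-queue the piece on failure ...
				results <- &pieceResult{index: pw.index, buf: buf}
			}
		}()
	}
	// Collect every piece, then assemble in index order.
	assembled := make([][]byte, len(pieces))
	for done := 0; done < len(pieces); done++ {
		res := <-results
		assembled[res.index] = res.buf
	}
	close(workQueue) // all pieces are in; let the workers drain and exit
	return assembled
}
```

Because the queue is buffered large enough to hold every piece, a worker that hits a flaky peer can simply push the piece back onto the channel for another goroutine to retry.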
The full implementation is available on GitHub, and is easy enough to use as an alternative client or as a walkthrough if you’d prefer to build your own.
Remember those action movies like The Fast and the Furious where cars are constantly getting riddled with bullets? What would it take to protect those vehicles from AK-47 fire? In [PrepTech]’s three-part DIY composite vehicle armor tutorial, he shows how he was able to make his own bulletproof armor from scratch. Even if you think the whole complete-collapse-of-civilization thing is a little far-fetched, you’ve got to admit that’s pretty cool.
The first part deals with actually building the composite. He layers stainless steel, ceramic mosaic tiles, and fiberglass, bound together with epoxy resin. The resin was chosen for its high density of three-dimensional cross-links, while fiberglass happened to be the most affordable composite fabric. Given the tiny shards produced by cutting fiberglass, extreme care must be taken to keep them out of your clothes and face. Wearing a respirator and gloves, as well as a protective outer layer, helps.
After lamination, the fabric hardens to the point where individual strands become stiff. The next layer – the hard ceramic – works to deform and slow down projectiles, robbing them of around 40% of their kinetic energy on impact. He pipes silicone between the tiles to increase flexibility. Rather than using one large tile, which can only withstand a single impact, [PrepTech] uses a mosaic of tiles, allowing multiple hits without affecting the integrity of the surrounding tiles. And while industrial armor uses boron carbide or silicon carbide, ordinary ceramic is significantly cheaper.
The stainless steel is sourced from a scrap junkyard and cut to match the dimensions of the other layers before being epoxied to the rest of the composite. The finished plate is left to sit for a week so the epoxy can fully harden before being subjected to ballistic tests. The plate survived shots from a Glock, a Škorpion vz. 61, and an AK-47, but was penetrated by a Dragunov sniper rifle. Increasing the steel to at least a centimeter of ballistic-grade plate might have protected against the higher-powered rounds, but [PrepTech] explains that he wasn’t able to obtain the material in his country.
Nevertheless, the lower-powered rounds were unable to puncture even the steel layer, so unless you plan on facing down high-caliber rifles, the plate is certainly a success as a low-cost defensive tool.
An offshoot of the infamous “How to Make (Almost) Anything” course at the Massachusetts Institute of Technology, “How to Grow (Almost) Anything” tackles the core concepts behind designing with biology – prototyping biomolecules, engineering biological computers, and 3D printing biomaterials. The material touches on elements of synthetic biology, the ethics of biotechnology, protein design, microfluidic fabrication, microbiome sequencing, CRISPR, and gene cloning.
In a similar fashion to the original HTMAA course, HTGAA works by introducing a new concept each week that builds up to a final project. Students learn about designing DNA experiments, using synthesized oligonucleotide primers to amplify a PCR product, testing the impact of genes on the production of lycopene in E. coli, protein analysis and folding, isolating a microbiome colony from human skin and confining bacteria for imaging, printing 3D structures that contain living engineered bacteria, and using expansion microscopy (ExM) to visualize a slice of mouse brain. The final projects run the gamut from creating a biocomputer in a cream to isolating yeast from bees.
Growing out of an initiative to create large communities around biotechnology research, the course requires minimal prior exposure to biology. By working hands-on with biodesign concepts, students can directly apply theoretical biology to real-world applications, making the course an ideal springboard for bio-inspired DIY projects. Even though the syllabus isn’t fully available online, there’s a treasure trove of past projects to browse for your next big inspiration.
Curious about past computer architectures? Software engineer [Fabien Sanglard] has been experimenting with porting Another World, an action-adventure platformer, to different machines and comparing the results in his “Polygons of Another World” project.
The results are pretty interesting. Because the game’s graphics are polygon-based, the optimizations vary widely across architectures, with tricks that let the software run on hardware released five years before the game’s publication. The platforms explored are primarily from the early ’90s, ranging from the Amiga 500, Atari ST, IBM PC, and Super Nintendo to the Sega Genesis.
The actual game contains remarkably little code, with the original version at around 6,000 lines of assembly and the PC DOS executable weighing in at only 20 KiB. The executable is essentially a virtual machine host that reads and executes uint8_t opcodes, with most of the business logic implemented in bytecode. The graphics use 16 palette-based colors, despite the Amiga 500 supporting up to 32. The aesthetics still fit the game nicely, with some very pleasant pixel art.
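The fetch-decode-execute shape of such a host is easy to picture. Here’s a hedged Go sketch with invented opcodes, just to show the pattern; Another World’s real opcode set (and its original assembly interpreter) are of course different:

```go
package vm

// VM is a toy bytecode host in the same spirit: it fetches uint8 opcodes
// and dispatches on them, with the game logic living in the bytecode.
type VM struct {
	bytecode []byte
	pc       int // program counter into the bytecode
	stack    []int
}

// run is the fetch-decode-execute loop.
func (vm *VM) run() {
	for vm.pc < len(vm.bytecode) {
		op := vm.bytecode[vm.pc]
		vm.pc++
		switch op {
		case 0x01: // hypothetical "push immediate byte" opcode
			vm.stack = append(vm.stack, int(vm.bytecode[vm.pc]))
			vm.pc++
		case 0x02: // hypothetical "jump to address" opcode
			vm.pc = int(vm.bytecode[vm.pc])
		default: // unknown opcode: halt
			return
		}
	}
}
```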
There’s a plethora of cool tricks that emerge in each of the ports, starting with the original Amiga 500 version. Before the modern CPU/GPU split, machines like the Amiga came with a blitter – a dedicated logic block that rapidly moves and modifies data in memory, copying large swathes of it in parallel with the CPU and freeing the CPU up for other operations.
To display the visuals, a framebuffer containing a bitmap drives the display. Three framebuffers are used: two for double buffering and one for saving the background composition, which avoids redrawing static polygons. Several tricks within the framebuffer improve the graphical experience. For scenes with translucent hues, special palette values are produced by reading the framebuffer index, adding 0x8, and writing it back.
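The trick works because palette entries 0x8–0xF are set up as shaded versions of 0x0–0x7, so nudging an on-screen index into the upper half of the palette tints whatever is already drawn. A Go sketch of the idea – the function name is invented, and the buffer is simplified to one palette index per byte:

```go
package gfx

// shadeRegion applies the translucency trick: read each 4-bit palette
// index, add 0x8 to bump it into the upper, pre-shaded half of the
// palette, and write it back over whatever was already drawn.
func shadeRegion(framebuffer []byte, offsets []int) {
	for _, o := range offsets {
		if framebuffer[o] < 0x8 { // indices 0x0-0x7 have shaded twins at 0x8-0xF
			framebuffer[o] += 0x8
		}
	}
}
```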
Challenges also arise when manipulating pixels within each machine’s CPU and bus bandwidth limits. To fill areas, the blitter offers a feature called “Area Fill Mode” that scans each line left to right looking for edges, rendering the bit arrays with the spaces between lines filled in. But since the framebuffer is stored as four separate areas of memory – bitplanes – a naive approach would mean drawing the lines and filling the areas four times over, multiplied across the hundreds of polygons rendered by the engine. The solution was to set up a temporary “scratchpad” buffer and render each polygon into that clean space; because the blitter can render anywhere in memory, the polygon can then be copied to the screen area with a single masked blit operation.
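The masked copy at the end is a classic compositing operation: keep the screen wherever the mask is clear, take the scratchpad wherever it is set. A rough software rendering of the idea in Go, simplified to one byte per pixel rather than the Amiga’s bitplanes:

```go
package gfx

// blitMasked copies a polygon from the scratchpad buffer to the screen,
// touching only the pixels covered by the mask - a software analog of the
// blitter's masked copy.
func blitMasked(screen, scratch, mask []byte) {
	for i := range screen {
		screen[i] = screen[i]&^mask[i] | scratch[i]&mask[i]
	}
}
```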
Intrigued? The series continues with deep dives into the Atari ST and IBM PC, with a writeup on the Sega Genesis/Mega Drive still to come.
Lithium-ion batteries are notorious for bursting into flames, and there seem to be endless ways to trigger a failure. While they’re a compact and relatively affordable rechargeable chemistry for hobbyists, a damaged battery can be dangerous and lead to fires.
Several engineers from the University of Illinois have developed a solid polymer-based electrolyte that is able to self-heal after damage, preventing explosions. The material can also be recycled without the use of high temperatures or harsh chemical catalysts. The results of the study were published in the Journal of the American Chemical Society.
As the batteries go through cycles of charge and discharge, they develop branch-like structures of solid lithium known as dendrites. These dendrites can cause electrical shorts and hotspots, growing large enough to puncture internal parts of the battery and trigger explosive chemical reactions between the electrodes and the liquid electrolyte. While engineers have long looked to replace the liquid electrolytes in lithium-ion batteries with solid materials, many candidates have been brittle and poorly conductive.
The high temperatures inside a battery melt most solid ion-conducting polymers, making them a less attractive option for non-liquid electrolytes. Later studies producing solid electrolytes from networks of cross-linked polymer strands delayed the growth of dendrites, but yielded structures too complex to recover after damage. In response, the researchers at the University of Illinois developed a similar network polymer electrolyte in which the cross-link points undergo exchange reactions, swapping out polymer strands. The new polymer stiffens upon heating, minimizing the dendrite problem, and it breaks down and resolidifies far more easily after damage.
Unlike conventional polymer electrolytes, the new polymer’s conductivity and stiffness both increase with heating. The material also dissolves in water at room temperature, making recycling both energy-efficient and environmentally friendly.
If you were asked to imagine a particle accelerator, you would probably picture a high-energy electron beam contained within a kilometers-long facility, manned by hundreds of engineers and researchers. You probably wouldn’t think of a chip smaller than a fingernail, yet that’s exactly what the SLAC National Accelerator Laboratory’s Accelerator on a Chip International Program (ACHIP) has accomplished.
The Stanford University team developed a device that uses lasers to accelerate electrons along etched channels on a silicon chip. The idea for a miniature accelerator has existed since the laser’s invention in 1960, but the requirement for a device to generate electrons made the early proof-of-concepts difficult to manufacture in bulk.
The electromagnetic waves produced by lasers have much shorter wavelengths than the microwaves used in full-scale accelerators, allowing them to accelerate electrons in a far more confined space – channels can be shrunk to three one-thousandths of a millimeter wide. To couple the laser and the electrons properly, the light waves must push the particles in the correct direction with as much energy as possible. The device also has to generate the electrons and send them down the proper channel in the first place. With the accelerator etched into silicon, all of these components can fit on the same chip.
Within the latest prototype, a laser hits a grating from above the chip, directing the energy into a waveguide. The electromagnetic waves radiate out, moving with the waveguide until they reach an etched pattern that creates a focused electromagnetic field. As electrons move through the field, they accelerate and gain energy.
The results showed that the prototype could boost electrons by 915 electron volts, equivalent to gaining 30 million electron volts over a full meter. While that’s nowhere near the scale of SLAC, the design scales up far more easily, since researchers can fit multiple accelerating paths onto future chips without the bulk of a full-scale accelerator. The chip acts as a single accelerator stage, which could let more researchers run experiments without needing to reserve time at expensive full-scale particle accelerators.
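As a quick sanity check on that equivalence – assuming the roughly 30-micrometer acceleration channel reported for the prototype – the arithmetic works out:

```latex
\frac{915\ \mathrm{eV}}{30\ \mu\mathrm{m}}
  = \frac{915\ \mathrm{eV}}{3\times10^{-5}\ \mathrm{m}}
  \approx 3.05\times10^{7}\ \mathrm{eV/m}
  \approx 30\ \mathrm{MeV/m}
```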