Category Archives: Sci-Tech

The Met & Other Galleries Offer Remote Viewing Via Oculus Virtual Reality

The Metropolitan Museum of Art Launches New Immersive Virtual Reality and Online Feature with Iconic Works from Its Collection
The Temple of Dendur and works from the Arts of Oceania galleries have been transformed for virtual reality (VR) experience and on the web

The Met’s new features, created in collaboration with the platform Atopia, introduce a new way for cultural institutions around the world to build their own VR and online exhibitions.

(New York, November, 2025)—The Metropolitan Museum of Art has launched two new virtual reality (VR) features, Dendur Decoded and Oceania: A New Horizon of Space and Time, that explore the Museum’s beloved Temple of Dendur and monumental works from the Oceanic art collection in the newly reopened Michael C. Rockefeller Wing—such as the Ceremonial House Ceiling from the Kwoma people of Papua New Guinea, the Asmat bisj poles, and Atingting kon (slit gongs) from Vanuatu—in 3D. The experiences will allow global audiences to view these treasured galleries and works using a personal VR headset or on The Met’s website. Designed in collaboration with Atopia, a platform for immersive art and culture, The Met’s virtual experiences introduce a new way for art institutions to create and publish their own VR and web features, providing more digital access to VR innovations across the museum field.

The Met’s first VR experiences, Dendur Decoded and Oceania: A New Horizon of Space and Time, were developed in close consultation with Met curators. They feature original, innovative storytelling and high-resolution 3D scans created by The Met’s Imaging team, and they allow virtual visitors to delve into artworks through movement, sound, interaction, and play. From stepping inside the Temple of Dendur to bringing the 17-foot bisj poles to eye level, these virtual experiences offer a singular opportunity to explore these iconic works.

“The Met collection is enjoyed by millions of visitors a year, and by exploring the vast possibilities of virtual spaces, we can offer unparalleled cultural experiences to audiences no matter where they are located,” said Max Hollein, The Met’s Marina Kellen French Director and CEO. “These two new VR and web features foreground compelling storytelling and curatorial scholarship, and they provide immersive, participatory access to some of The Met’s remarkable works of art.”

Annabell Vacano, founder of Atopia, said, “Until now, immersive exhibitions were bespoke and expensive. We created Atopia so museums of all sizes could design, publish, and scale interactive storytelling so their collections can be accessed from anywhere in the world. The Met has been an incredible partner in designing Atopia’s storytelling tools, and it’s been an honor to work with their world-class teams.”

Dendur Decoded
The Dendur Decoded VR and web experience is organized as a vividly detailed adventure arranged in four “acts” and includes over 150 newly presented pieces of content, including materials (images and video) from archives at The Met and UNESCO. The content was created in collaboration with Isabel Stünkel, Curator, Department of Egyptian Art, and Erin Peters, Assistant Professor, Art History & Visual Culture at Appalachian State University, with support from Diana Craig Patch, Lila Acheson Wallace Curator in Charge of Egyptian Art, and Janice Kamrin, Curator in Egyptian Art at The Met.

It begins with “Act I: Explore Dendur,” which introduces the Temple and helps visitors learn how to read aspects of its decoration. “Act II: Dendur in Nubia” presents a 3D, 360-degree film about the Temple of Dendur’s original location along the west bank of the Nile River, its dismantling as part of the international UNESCO Campaign to Save the Monuments of Nubia to protect it from being submerged beneath Lake Nasser, and its award to the United States in 1967. “Act III: Reconstructing Dendur” invites visitors to virtually rebuild part of the temple and learn how The Met reassembled it in New York in a new gallery that opened to the public on September 27, 1978. “Act IV: Reflection” showcases past MetLiveArts performances and the ways in which contemporary artists have been inspired by the Temple; there is also an optional opportunity to leave a personal contemplation or observation through a voice note.

Oceania: A New Horizon of Space and Time
Oceania: A New Horizon of Space and Time celebrates the dazzling Oceanic works in the Museum’s newly reopened Michael C. Rockefeller Wing. Fifteen objects are contextualized with sound, story, and a spatial design inspired by an outdoor environment that evokes the Pacific Islands. Within the space, these objects are accompanied by illuminating content such as immersive original audio and Pacific storytelling, archival imagery, 360-degree video, and high-resolution 3D models. Featuring works from across The Met collection of Oceanic art, highlights in the VR and web experience include The Met’s impressive Ceremonial House Ceiling, which evokes the polychrome interior of a men’s ceremonial house in the Sepik River region of Papua New Guinea; five soaring upright spirit poles (bisj) from the Asmat people of Western New Guinea; and the 14-foot-tall Atingting kon (slit gong) from Vanuatu.

In this exploratory environment, a lush virtual gallery is populated by the 3D-scanned objects and immersive soundscapes. Examples include the Sawos Ancestor Figure, which invites close looking through a compelling audio story about a battle in which the ancestral figure came to life, paired with an interactive 3D model. The Ceremonial House Ceiling includes a game in which visitors discover motifs across the 270 pangal (painted panels), including crocodiles, insects, and cassowaries. The Body Mask, created by an Asmat artist, includes contemporary photography by Joshua Irwandi, a documentary photographer based in Jakarta, Indonesia, showing how these masks are made and worn by the Asmat people of southwest New Guinea.

Developed along with Maia Nuku, The Met’s Evelyn A. J. Hall and John A. Friede Curator for Arts of Oceania, and Sylvia Cockburn, Senior Research Associate for Arts of Oceania, the experience will be animated with voices from across the Pacific Islands, including a greeting by Michael Mel (PhD, performance artist, lecturer, curator, and teacher and currently Senior Lecturer and Head of Expressive Arts Department at the University of Goroka), and a concluding sunset ceremony by Che Wilson (Ngāti Rangi-Whanganui, Tūwharetoa, Mōkai Pātea, Ngāti Apa, Ngā Raurua), a Māori leader with a career that spans cultural advocacy, governance, and leadership.

VR and Online Innovations for the Cultural Sector
For The Met’s virtual experiences, the Museum’s Emerging Technology and Digital department worked collaboratively with Atopia to develop a feature that will enable museums of all sizes to design and publish similar immersive exhibitions in-house. Through a “no-code” editor available on the platform, museum curators and designers can drag and drop images, 3D scans, and didactic information from their collections into virtual spaces. These can then be launched on the platform, becoming instantly available on the web and in VR.

Access and Availability
The two immersive exhibitions are available now for free on The Met’s website and on Meta Quest 2, 3, and 3S headsets. Audio across the experiences is closed-captioned.

Atopia is compatible with standard web browsers on desktop and laptop computers as well as with personal VR headsets. It also supports both individual and invite-only multiplayer visits.

Related Programs
These VR and web features will also be activated through several events, including Met Expert Talks. These talks include the opportunity for Museum visitors to interact with the virtual experiences on headsets provided by The Met for a deeper and more contextualized viewing. There will also be VR pop-ups at Teens Take The Met on May 15, 2026, as well as during upcoming Teen Friday Career Labs, where teens can hear directly from the VR creative team. For homebound audiences unable to visit the new Arts of Oceania galleries in person, special Collection Tours will be offered for Oceania: A New Horizon of Space and Time via headsets provided by the Museum. More details about VR events at The Met will be announced.

Credits 
Dendur Decoded and Oceania: A New Horizon of Space and Time were created with a cross-disciplinary team from across The Met, led by Brett Renfer, Senior Project Manager of Emerging Technologies, along with Curatorial, Education, Imaging, and Digital.

This project is made possible by the Director’s Fund.

For the Silo, Jarrod Barker.

About The Metropolitan Museum of Art
The Met presents art from around the world and across time for everyone to experience and enjoy. The Museum lives in two iconic sites in New York City—The Met Fifth Avenue and The Met Cloisters. Millions of people also take part in The Met experience online. Since it was founded in 1870, The Met has always aspired to be more than a treasury of rare and beautiful objects. Every day, art comes alive in the Museum’s galleries and through its exhibitions and events, revealing both new ideas and unexpected connections across time and across cultures. Discover more at metmuseum.org.

About Atopia
Atopia is a new way to experience culture online. From any web browser or VR headset, audiences can step inside immersive exhibitions designed by leading museums worldwide. Our no-code platform empowers cultural institutions to create and share virtual experiences at scale—bringing exhibitions to global audiences beyond physical walls. Our mission: to open access to culture everywhere. Discover more at https://atopia.space

How Japan’s Government Created the World’s Most Sinister Cars

You know the look: A long, low-slung sedan finished in shiny black paint with equally bright chrome rolls through town. Beige, burgundy, and blue cars move out of the way, magnetically repelled by the menacing four-door. 

This threatening style has been idolized by Hollywood since the 1960s, perhaps most famously in the unfortunately short-lived ABC television program The Green Hornet, in which actor Van Williams drove a Chrysler Imperial modified by Dean Jeffries. It was painted black, of course, and the chrome slats that ran horizontally across its huge grille clearly meant business—even on the 19-inch TV screens that took up considerable living room real estate in a 1960s home. 

Black paint, while popular today, was a daring, high-style choice in the 1960s that was not-so-subtly influenced by the largely chauffeur-driven cars that carried around heads of state and other major politicians. For instance, the Soviet Union’s KGB notoriously drove around in black-painted GAZ Chaika sedans that had a distinctly Detroit-inspired appearance. (The irony of which seems to have been lost.) 

An outsider might not expect Japan, where the pavement has been specifically engineered to be quiet, to have a small but mighty homegrown industry producing the world’s most ominous cars.

Nissan

The Japanese Royal Family Needed a Ride of Their Own

With roots stretching back more than 1400 years, Japan’s imperial household is managed by the Imperial Household Agency, which does just what its name suggests: it handles the royal family’s affairs. This is no easy task in a country so steeped in tradition. In fact, the Imperial Household Agency employs more than 1000 civil servants, which stands in marked contrast to the self-funded, non-governmental managers of, say, the British and Swedish royal families.

The Imperial Household Agency’s wide-ranging list of tasks includes everything from ensuring that the Emperor’s family is comfortable and healthy to organizing and overseeing ceremonies. In the early 1960s, the Imperial Household Agency called automakers together and told them to submit designs for an official state vehicle. The car needed to have four doors, be reasonably spacious, and have a prestigious but not overly ostentatious appearance. 

Nissan

Prior to World War II, the Emperor’s vehicle fleet consisted of large, imported cars from brands like Rolls-Royce and Daimler. The country’s nascent automotive industry focused on small, mostly work-oriented vehicles. By the early 1960s, Japan’s recovery from the war’s devastating effects was well underway, fueled heavily by Western investment. While Japan didn’t give up on its traditions, the bright lights of Tokyo had a strong American influence. So too did the country’s cars, like the Toyota Crown, which looked like last season’s Chevy. So when the Imperial Household Agency came calling, it should come as no surprise that the results looked rather Detroit-ish.

The winner was a brand you might not have heard of: Prince Motor Company. Founded in 1947, Prince was Japan’s short-lived flagship automaker in the early 1960s, though it was in the midst of being folded into Nissan.

The Prince Royal that got the royal nod, so to speak, was based on the Prince Gloria, a vehicle already used by the Japanese government in an official capacity. The Prince Royal was extended to provide those in back with stretch-out legroom, and the rear doors were modified to open coach-style for easier and more elegant access. While not a particularly showy car, the Prince Royal has an understated elegance. Its stacked headlights recall the Ford Galaxie and the big W108-generation Mercedes-Benz models. The tall greenhouse, on the other hand, is a nod to practicality rather than style. Inside, in the Japanese luxury tradition, the wool seats make nary a peep as passengers slide across. Leather would be rather squeakier.

The Prince Royal
The Prince Royal gained the Imperial Household Agency’s nod as transport for the Emperor of Japan. These cars served until 2006, when they were replaced by a special version of the Toyota Century. Nissan

Underhood, the Prince Royal utilized a 6.4-liter V-8—not Japan’s first, but arriving only a couple of years after the so-called “Toyota Hemi.” An eight-cylinder design was, admittedly, an odd choice; while inherently fairly smooth, the engine was undoubtedly a costly thing to develop. Fewer than 10 of these engines were ever built, one of which lives at the unusual and yet highly appealing Nissan Engine Museum and Guest Hall next to the company’s powertrain factory in Yokohama, Japan.

Just five Prince Royals were built, and they stayed in service for a staggering 40 years before being replaced by a limousine version of the Toyota Century. But the Century doesn’t really owe its status to the Prince Royal. It should thank the Nissan President, a model that was developed back when Nissan and Prince were quasi-competitors.

1982 President Type-C
Into the 1980s, the Nissan President retained a classic, but hardly ostentatious, look, as seen on this 1982 President Type-C. Nissan

The President, as its name suggests, was intended from the start as a government vehicle. Unlike Toyota’s Crown, the first Japanese car to use a V-8, the President was developed in direct response to the Imperial Household Agency’s request. At nearly 200 inches long, the President was a very large sedan by Japanese standards. Its styling is contemporary if a bit bland, even in comparison to the Prince Royal. Horizontal headlights embedded in a broad, generic grille give way to fenders that have an almost Ford Falcon modesty to them. There’s a bit more drama at the rear with big NISSAN badging. Copious chrome lines the rocker panels.

While the Prince Royal ended up being chosen to transport the Emperor, Nissan’s President didn’t go home empty-handed. Instead, it was used by the country’s Prime Minister. Government versions were only minimally modified compared to the President models sold through Nissan’s dealership network in Japan, though official-use models were invariably painted black. Those available to consumers came in a slightly wider range of colors. The President was a sign that its owner—and, most likely, the person riding in the back—had arrived. It was the Lincoln Continental of its era. Today, when government spending is closely watched by a hawkish public, there is no U.S.-market comparison.

Nissan wool upholstery
In Japan, fabric upholstery like the wool seen in the 1973 Nissan President remains an indicator of a high-end vehicle because it makes no sound as a human slides across it. Nissan

Nissan didn’t dominate government contracts, but it was a commanding presence into the late 1980s. Then, almost inexplicably, the brand gave up. Its chrome-laden second-generation President, which was based on an early 1970s design, was replaced with a comparatively plebeian design that would be sold in the U.S. as the Infiniti Q45. That’s not to say that the Q45 was a dud, but its big plastic bumpers and, in Japanese-market spec, Jaguar-ish grille were not in keeping with tradition. The Imperial Household Agency famously rejected a stretched version of the 1990 President in favor of the Toyota Century.

Toyota’s Century Begins

The original Toyota Century was overshadowed, at least to a degree, by the Nissan President that beat it to the market in Japan and initially secured more government contracts. Toyota

Thanks in part to the floodgates opened by the 25-year import rule for vehicles from Japan, the Toyota Century has something of a cult status among enthusiasts in the U.S. today. It was not always this way; while the Century was undoubtedly a high-tech vehicle at its 1967 debut, the Imperial Household Agency initially passed it up in favor of the Nissan President. However, the Century’s rise coincided with Toyota’s phenomenal growth in the 1970s and 1980s, when it began to overtake Nissan as the premier Japanese automaker.

The original Century ran for three decades, always with V-8 power. Despite the fact that its specs and power could have appealed to buyers in Europe and, especially, the U.S., it was rarely sold in left-hand-drive markets. (Toyota flirted with the idea in the early 2000s before concluding that the conservative Century would be no match for the comparatively flamboyant Mercedes-Benz S-Class.)

Toyota

Yet it’s the Century that endures in Japan, an icon in its own time. The Emperor of Japan rides around in a stretched one, approved by the Imperial Household Agency, of course. The redesigned model that arrived in 2018 carries on the style of the 1960s original, in marked contrast to the edgy, modern look found on any Toyota or Lexus model. There’s even an SUV version now, though its front-wheel-drive architecture and hybrid V-6 powertrain mean it’s more like a snazzy Toyota Highlander than a bespoke Emperor-hauler.

Toyota

Clearly, the Century has won out, so much so that Toyota recently announced it will position the Century as its own brand, a more conservative sibling to Lexus. It did face some limited competition from Mitsubishi with its mid-1960s Debonair. While the Mitsubishi, with its slab sides and fenders that leap forward past its grille, is basically a rolling villain, the four- or six-cylinder sedan lacked the interior volume and the power to compete with the Century or the President. Its angular 1986 replacement, which looked sort of like a K-Car with fender mirrors, was anything but debonair.

Mitsubishi Debonair front three quarter
Though its effort was comparatively short-lived, the Mitsubishi Debonair boasted a fantastic name and slab-sided Lincoln Continental-inspired looks, if not Conti-style proportions. Mitsubishi

The Yakuza Turns State Cars Into Mafia Cars

Nobody does organized crime like the Japanese—and that is not meant as a compliment. The Yakuza, as the Japanese crime syndicates are broadly known, hit its peak right around the time when the decidedly more upstanding Imperial Household Agency was asking automakers to design a state vehicle.

Those vehicles were soon appropriated by the Yakuza. In retrospect, they have a sinister, angry look. If the bad guy in a period flick drives a car in Tokyo, it’ll be a President, a Century, or perhaps an early Debonair. Set in 1999, HBO’s Tokyo Vice puts the Q45-adjacent Nissan President front and center. While it may not have been the vehicle of choice for the Emperor, that era’s President was the car to have for the heads of organized crime. Perhaps that’s why Nissan steered away from tradition with its final redesign, a swoopy model unsuccessfully sold here as the Infiniti Q45.

1990 Nissan President
The 1990 Nissan President abandoned the 1960s-style chrome bumpers of its predecessors. Nissan

These big, black sedans have an authoritarian presence. Their drivers may believe they can act with impunity. Not only are their cars imposing, but they look official—even if those inside are doing anything but official business. Yakuza members often mounted curtains inside their Presidents and Centurys, a style known as VIP that persists today—albeit in a much broader and harder-to-define look.

We have no direct equivalent in Canada or the U.S., at least in terms of how the criminal underground appropriated cars meant for high-ranking government officials. The Crown Victorias once favored by Canadian and American cops lack the luxury and exclusivity of a Century or President. A Chevy Tahoe can’t be all that menacing if you can find dozens of them in the carpool line at your local elementary school. And while our head of state has long had a highly modified Cadillac-ish limousine, which has been described as a tank with a limousine body, it lacks a showroom counterpart. That said, the wreath-and-crest brand made a strong appearance in the late-1990s/early-2000s setting of HBO’s The Sopranos.

It’s a different story in Japan, though. There, a government official arrives in black-and-chrome style, as dictated, if indirectly, by the edicts set forth by the Imperial Household Agency. The automotive equivalent of a tuxedo is, after all, always in style. For the Silo, Andrew Ganz/Hagerty.

Amazon Outage Created Perfect Hacker Conditions

AWS Outage Created “Perfect Storm” for Social Engineering Attacks 

Last week Amazon Web Services (AWS) went down worldwide, including here in Canada, causing a ripple effect from governments and local municipalities to enterprises, small businesses, and the individuals who rely on these services daily.

AWS is a cloud-based service that thousands of major companies use not only to store their data, but also to run the apps and software behind many critical business services.

From basic communications apps such as Snapchat, Signal, and Reddit to airlines such as Delta and United reporting disruptions to their customer-facing operations, when these services go down it highlights our reliance on just a few cloud services companies (AWS, Microsoft Azure, and Google Cloud) to ‘run the country,’ so to speak.

The AWS outage has further impacted shopping websites, banking apps, and even streaming and smart home devices.

And while organizations scramble to ensure business operations continue to run, it’s also an opportunity for individuals to do a quick check-in on their own cyber hygiene. 

Cybercriminals and hackers can easily take advantage of these types of outages to deploy an array of social engineering attacks. 

Whether in the office or at home, nothing is more frustrating than losing the ability to access files and documents or to communicate with business associates or loved ones, especially in an emergency or crisis.

Hackers who rely on mass urgency and panic will see this as an opportunity to take advantage of people’s heightened emotions, with phishing emails offering to “fix” the issue and get you back online and into your accounts or apps.

But in reality, these scammers are looking to steal your personal information, such as login credentials, by tricking you into updating your software or resetting your password.

During major outages, users should avoid clicking on any links in emails, texts and pop-ups claiming to be able to fix the outage. 

Additionally, double-check that any alerts or update messages from organizations, such as your bank or payment apps, are verified through the official website or app.
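For readers who want to see the principle in code, here is a minimal illustrative sketch (not an official tool from any bank) of why exact domain matching matters. Phishing links often bury a real brand name inside a lookalike address, so a simple “does the hostname exactly match a domain I verified myself?” check catches tricks that a quick glance misses. The mybank.com domain below is purely hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: domains you verified yourself (e.g., typed into
# the browser from a paper statement), not taken from an email or text.
KNOWN_OFFICIAL = {"mybank.com", "www.mybank.com"}

def looks_official(link: str) -> bool:
    """True only if the link's hostname exactly matches a verified domain."""
    host = (urlparse(link).hostname or "").lower()
    return host in KNOWN_OFFICIAL

# A lookalike domain embeds the brand name but fails the exact match:
print(looks_official("https://mybank.com.secure-login.example/reset"))  # False
print(looks_official("https://www.mybank.com/alerts"))                  # True
```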

This is the time to make sure you are using a strong password and multifactor authentication to prevent any unauthorized access to your accounts. 

Delay Things

Individuals should also delay sensitive actions, such as major financial transactions, password resets, or critical software updates, until the service in question has been announced as officially restored.

Furthermore, once the service disruption has ended, users should monitor any affected accounts for unusual activity, discrepancies, and duplicate or fraudulent transactions.

Finally, this is an excellent reminder for individuals to make sure they have a backup system in place for accessing important documents and for communications.

This can be as easy as keeping a secondary email account or even a backup mobile phone. For the Silo, Stefanie Schappert.

ABOUT THE AUTHOR

Stefanie Schappert, MSCY, CC, Senior Journalist at Cybernews, is an accomplished writer with an M.S. in cybersecurity who has been immersed in the security world since 2019. She has more than a decade of experience in America’s #1 news market, working for Fox News, Gannett, Blaze Media, Verizon Fios1, and NY1 News. With a strong focus on national security, data breaches, trending threats, hacker groups, global issues, and women in tech, she is also a commentator for live panels, podcasts, radio, and TV. She earned the ISC2 Certified in Cybersecurity (CC) certification as part of the initial CC pilot program, has participated in numerous Capture-the-Flag (CTF) competitions, and took 3rd place in Temple University’s International Social Engineering Pen Testing Competition, sponsored by Google. She is a member of the Women’s Society of Cyberjutsu (WSC) and Upsilon Pi Epsilon (UPE), the International Honor Society for Computing and Information Disciplines.

ABOUT CYBERNEWS

Friends of The Silo, Cybernews is a globally recognized independent media outlet where journalists and security experts debunk cyber myths through research, testing, and data. Founded in 2019 in response to rising concerns about online security, the site covers breaking news, conducts original investigations, and offers unique perspectives on the evolving digital security landscape. Through white-hat investigative techniques, the Cybernews research team identifies and safely discloses cybersecurity threats and vulnerabilities, while the editorial team provides cybersecurity-related news, analysis, and opinions from industry insiders with complete independence.

Cybernews has earned worldwide attention for its high-impact research and discoveries, which have uncovered some of the internet’s most significant security exposures and data leaks. Notable ones include:

  • Cybernews researchers discovered multiple open datasets comprising 16 billion login credentials from infostealer malware, social media, developer portals, and corporate networks – highlighting the unprecedented risks of account takeovers, phishing, and business email compromise.
  • Cybernews researchers analyzed 156,080 randomly selected iOS apps – around 8% of the apps present on the App Store – and uncovered a massive oversight: 71% of them expose sensitive data.
  • Recently, Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews security research team discovered an unprotected Elasticsearch index, which contained a wide range of sensitive personal details related to the entire population of Georgia. 

Supercharge Your Vinyl Setup With These Tools

Audio-Technica expands turntable accessory offerings for all vinyl enthusiasts
Stow, OH, October, 2025 — Our friends at Audio-Technica, a leading innovator in transducer technology for over 60 years, are excited to launch a new range of turntable accessories designed to help vinyl listeners achieve the best from their record collections. The latest additions include two new slip-mats, precision alignment tools and a stainless-steel disc stabilizer.
These new additions join Audio-Technica’s established lineup of turntable accessories including the AT6012 Record Cleaning Kit, stylus cleaners and more, expanding a complete family of products designed to help vinyl users care for and enjoy their collections to the fullest.

New to the Audio-Technica Slipmat series are the AT-SMCR2 Cork-Rubber Slipmat (MAP: $35.00 usd/ $49.00 cad) and the AT-SMC1 Cork Slipmat (MAP: $25.00 usd/ $35.00 cad). The AT-SMCR2 is engineered from a premium blend of cork and rubber to absorb a wide range of vibrations, particularly at lower frequencies, delivering clearer audio reproduction. The cork-rubber blend also provides antistatic properties to reduce pops and clicks caused by static discharge. For listeners seeking a simpler option, the AT-SMC1 provides excellent resonance control and a stable playback surface without shedding particles or attracting dust like traditional felt mats.

Beyond vibration control, Audio-Technica introduces two new cartridge alignment tools designed to ensure precise playback geometry: the AT-VTAZ1 Azimuth + VTA Alignment Tool (MAP: $14.00 usd/ $20.00 cad) and the AT-CAP1 Cartridge Alignment Protractor (MAP: $17.00 usd/ $24.00 cad). The AT-VTAZ1 allows users to achieve accurate tonearm height and cartridge azimuth adjustment. Proper alignment ensures even stylus wear, accurate channel balance, and minimal distortion. The AT-CAP1 utilizes the widely used Baerwald alignment method to set cartridge offset angle and null points to deliver optimal tracking and reduced distortion.
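The geometry behind two-point protractors like the AT-CAP1 is well documented, and a rough sketch of the math helps explain what “null points” are. For a pivoted tonearm, zeroing tracking error at two radii n1 and n2 forces the offset angle to satisfy sin(offset) = (n1 + n2) / (2L) and the overhang to equal L - sqrt(L^2 - n1*n2), where L is the arm’s effective length. The Python sketch below uses the commonly published Baerwald null radii of roughly 66.0 mm and 120.9 mm and a hypothetical 229.5 mm arm; it illustrates the general method, not Audio-Technica’s specification.

```python
import math

# Baerwald (Lofgren A) null radii for IEC groove limits, in mm.
NULL_INNER, NULL_OUTER = 66.0, 120.9

def two_null_alignment(effective_length_mm: float):
    """Offset angle (degrees) and overhang (mm) that zero tracking
    error at both null radii for a pivoted tonearm."""
    L = effective_length_mm
    offset_deg = math.degrees(math.asin((NULL_INNER + NULL_OUTER) / (2 * L)))
    overhang_mm = L - math.sqrt(L**2 - NULL_INNER * NULL_OUTER)
    return offset_deg, overhang_mm

# Hypothetical 229.5 mm arm: about a 24.0-degree offset and 18.1 mm overhang.
offset, overhang = two_null_alignment(229.5)
print(f"offset angle: {offset:.1f} deg, overhang: {overhang:.1f} mm")
```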

The new AT628a Stainless Steel Disc Stabilizer (MAP: $79.00 usd/ $111.00 cad) is designed to minimize resonance and keep records firmly in place during playback. The stabilizer accommodates even slightly warped records with two recessed inner rings on its underside for secure contact.

Rounding out the new launches are the AT-ST3 Speaker Stands (MAP: $59.00 usd/ $83.00 cad), designed to enhance the performance of the AT-SP3X or other similarly sized bookshelf speakers. Constructed from rigid alloy steel with vibration-damping cork feet, each stand provides stable support for speakers weighing up to 3 kg (6.6 lb). The 13-degree angled design directs sound upward for clearer projection and helps reduce sound wave reflections off hard surfaces, ensuring cleaner, more accurate audio reproduction.

For the Silo, Jarrod Barker.

Audio-Technica was founded in 1962 with the mission of producing high-quality audio for everyone. As we have grown to design critically acclaimed headphones, turntables and microphones, we have retained the belief that great audio should not be enjoyed only by the select few, but accessible to all. Building upon our analog heritage, we work to expand the limits of audio technology, pursuing an ever-changing purity of sound that creates connections and enriches lives.

Canadian Company To Help Astronauts Return To Moon In 2026

ALUULA Composites’ super-strong, lightweight polyethylene material is now being used to develop expandable habitats that would let NASA’s astronauts live safely and comfortably on the moon, ahead of the landing planned for 2027.

This small company on Canada’s west coast is playing a big role in helping astronauts return to and orbit the moon in 2026.

Artemis II crew members (from left) CSA (Canadian Space Agency) astronaut Jeremy Hansen, and NASA astronauts Christina Koch, Victor Glover, and Reid Wiseman walk out of Astronaut Crew Quarters inside the Neil Armstrong Operations and Checkout Building to the Artemis crew transportation vehicles prior to traveling to Launch Pad 39B as part of an integrated ground systems test at Kennedy Space Center in Florida photo: NASA

ALUULA Composites recently signed an agreement with Max Space, an American company, to use its innovative composite material to build space habitats on the moon. The company’s ultra-high-molecular-weight polyethylene (UHMWPE) laminate will be used to create a large living and working area for NASA’s astronauts when they return to the moon in September 2026. 

The innovative material was selected because it has eight times the strength-to-weight ratio of steel and is extremely durable, which is ideal for space travel.

The Max Space team with their new expandable space habitat. photo: Max Space

The first Max Space inflatable space habitat is slated to launch with SpaceX in 2026. The Max Space inflatables can be delivered into space in very small packages and then unfolded and expanded to create a much larger work space. For the Silo, Paul Clarke.

Would You Use AI For Buying A Car? One In Four Buyers Already Do

A recent consumer survey, backed by similar results from Elon University, reveals that AI adoption for car shopping is skyrocketing, rapidly becoming a standard part of the automobile buying process. Fully one in four buyers have already used AI tools this year to research, compare prices, negotiate, and otherwise outsmart dealerships, and an overwhelming 88% found them helpful. Signaling a seismic shift in the way North Americans shop for cars, nearly half of consumers indicated plans to use AI in their next purchase. The benefits aren’t just for buyers: dealerships are gleaning critical business intelligence from AI to inform sales strategies, train staff, and elevate customer engagement. The report below, from our friends at CarEdge, which offers its own AI Negotiator car buying tool that saves shoppers thousands, provides the first data-backed look at how AI tools are reshaping the car buying experience.

Mornine, an AI-powered car dealership robot.

Study: 1 in 4 Car Buyers Tap AI for Better Deals


Artificial intelligence is changing the way North Americans buy cars, and it’s a transition that is happening quickly. In the first-ever survey of its kind, CarEdge asked 500 car shoppers if they’re using AI tools like ChatGPT to research, compare, and negotiate during the car buying process. The results confirm a major shift is underway. One in four car buyers in 2025 are already using AI tools to gain an edge, and future buyers are even more likely to embrace these technologies.

Car buyers are finding AI to be a valuable tool. Among those who used tools like ChatGPT, Perplexity, Google Gemini, and others, 88% said it was helpful. AI is quickly becoming a trusted co-pilot for car buyers.

Key Findings: Car Buying Is Changing

The 2025 CarEdge AI & Car Buying Survey reveals a clear and growing trend: AI tools are quickly becoming part of the car buying process for a significant portion of consumers. Here are the standout findings:

1 in 4 Car Buyers Use AI 

25% of car buyers in 2025 say they used or plan to use AI tools like ChatGPT during the shopping or buying process. This contrasts with a recent survey by Elon University that found 52% of Americans now use AI large language models. While signs point towards increased adoption of AI tools, the CarEdge survey found that most car buyers are still in the early stages of integrating these tools into high-stakes decisions like vehicle purchases. This suggests there’s still significant room for growth in AI adoption amongst car buyers.

AI Use Is Accelerating

Among those who haven’t bought a car yet this year, 40% say they are using or plan to use AI tools during their search or deal-making. This is nearly 3x higher than the 14% seen among those who already bought a car earlier in the year.

AI Tools Deliver Results

Among those who used AI:

  • 88% say the tools were helpful
  • 32% found them very helpful
  • 60% used them “a lot” during the process

The AI Holdouts: Drivers Who Lease

Of the respondents who had already leased a car in 2025, none reported using any AI tools.

The AI-Adopting Buyer: Who’s Using It, and How?

AI adoption among car buyers is still in its early stages, but clear trends are beginning to emerge.

Among Buyers Who Already Purchased in 2025:

Just 14% of those who already bought a vehicle this year used AI tools during the process. Adoption rates were nearly identical across new and used buyers, with 14% in each group saying they used AI tools.

Among Future Car Buyers:

The numbers jump significantly when looking at those who haven’t yet bought in 2025. Among this group — who represent 39% of total respondents — 40% say they either already use or plan to use AI tools during their car search and buying process.

That’s more than triple the current usage rate among recent buyers, suggesting AI adoption is accelerating as awareness grows and tools become easier to use.

This group also appears to be more proactive: 60% of those who used AI tools during their buying journey said they used them “a lot,” while 40% used them only occasionally.

What Car Buyers Are Using AI Tools For

AI tools are quickly becoming essential research companions for car shoppers looking to make more informed, confident decisions. After all, why go it alone when a wealth of automotive knowledge powered by large language models (LLMs) is right in your pocket?

Among buyers who used AI tools during their car purchase or lease process, here’s how they put them to work:

88% — Researching Vehicles

The most common use by far, AI tools helped buyers learn about different models, trims, features, and reliability. For many, it was like having an always-available expert to explain the pros and cons of their options.

64% — Comparing Prices and Market Values

Buyers used AI to better understand fair pricing, from invoice pricing to out-the-door costs.

44% — Learning Negotiation Strategies

Nearly half of AI users leaned on these tools to prepare for conversations with salespeople. Whether role-playing negotiation scenarios or asking how to spot add-on fees, this group used AI to level the playing field at the dealership.

11% — Exploring Finance and Lease Options

A much smaller portion of buyers used these tools to become familiar with leasing vs. financing, how to calculate payments, and similar queries.

Industry Implications

Car buying has always been tilted in favor of the dealership. Information asymmetry — what the dealer knows versus what the customer knows — has long been the source of consumer frustration, confusion, and overpayment.

That dynamic is beginning to shift.

This survey confirms what many in the industry are only starting to realize: AI is giving car buyers the upper hand. Tools like ChatGPT are helping consumers cut through the noise, ask smarter questions, and avoid common dealership traps. Instead of relying on guesswork or scattered advice, buyers are turning to AI for fast, personalized guidance at every step.

But one auto industry veteran has words of caution for buyers relying heavily on AI tools.

“It’s both surprising and a little scary to see how quickly people are turning to AI to guide such a major financial decision,” said Ray Shefska, Co-Founder of CarEdge. “While tools like ChatGPT can be powerful, they’re only as good as the data behind them. AI should complement your research, not replace your own critical thinking.”

That perspective underscores the real takeaway of this report: AI works best when it’s used thoughtfully as a tool, not as a crutch. In an age where automation raises fears of job loss or decision-making without human oversight, this survey offers a more optimistic view — one where technology helps everyday consumers make smarter choices. Used wisely, AI can help level the playing field and bring more transparency and fairness to the car buying experience.

Methodology

This survey was conducted by CarEdge between June 19 and June 24, 2025. A total of 500 U.S. respondents participated, recruited through the CarEdge email newsletter and social media channels. Questions were tailored based on buying status to better understand how and when AI tools were used in the car shopping process.

For the Silo, Karen Hayhurst.

About CarEdge
Founded in 2019 by father-and-son team Ray and Zach Shefska, CarEdge is a leading platform dedicated to empowering car shoppers with free expert advice, in-depth market insights, and tools to navigate every step of the car-buying journey. From researching vehicles to negotiating deals, CarEdge helps consumers save money, time, and hassle; hundreds of thousands of happy consumers have used CarEdge to buy their car with confidence. With trusted resources like the CarEdge AI Negotiator tool, Research Center, Vehicle Rankings and Reviews, and hundreds of guides on YouTube, CarEdge is redefining transparency and fairness in the automotive industry. Follow them on YouTube, TikTok, X, Facebook, and Instagram for actionable car-buying tips and market insights. Learn more at www.CarEdge.com.

Our Horse Powered Past Drove Today’s Auto Tech

Where did auto tech start?

A horse and buggy. Excellent horse-power, huh? People got tired of the nurturing it took to care for a work horse. People wanted more, and as with anything, the need for something better fuels the spark for innovation. How about something that does work but doesn’t need rest? Doesn’t need medication? Doesn’t need someone to shovel up its crap? Take this formula and you get the steam engine, not a crazy engine, but an engine nonetheless. Suddenly the glowing aura of potential is perceivable, right on the horizon. Now we can have multiple horse-power without the care. Still needed someone to shovel, though.

The Horsey Horseless. Designed to prevent horses being frightened by a car.

Enter the mother of today’s automotive technology: the oil industry.

Instead of burning coal, why not find some ways to refine oil into fuel sources to run things on? Who knows? We could have been running advanced versions of steam engines today. They can actually be made fairly efficient and clean using current technology, and steam cars were quite practical back in 1918.

Then the internal combustion engine enters the scene. The oil companies love this, and a mass-marketed engine that is completely dependent on oil is born. Just think: this is awesome for business, because these engines need oil for both fuel and lubrication. Then all the different designs start flowing. (Off the top of my head and in no chronological order.) The single cylinder, then 2, then 4, then 6, then the flathead V8. Now this is where we start to see major horse-power and design improvements. The trusty ole’ inline 6s, the small-block-eating slant 6s, the overhead-valve V engine, big blocks, small blocks, Hemis. There are pancake engines, W engines, rotary engines, VTECs, boxer engines, and many, many more. (Not to mention all of the different fuel delivery systems!)

The cylinder, valves, and crankshaft of an internal combustion engine

The one thing that really makes me scratch my head is the fact that it took so long to get hybrids, smart-cars, electric cars, and hydrogen cars that are actually worth looking at and driving. I mean, why is it that I can take a full-size 2008 Chevrolet Silverado with a 5.3L Vortec engine, add a cold air intake, a Magnaflow exhaust system, and a good Edge Products programmer, and get an average of over 36 miles per gallon with the same horse-power? Why is it that I (not being an automotive engineer) can do this, but you can’t just buy one with those numbers from the manufacturer?


Not to mention brown-gas converters, which have been tested on most common engine types: they take mineral water and, using current flowing between two electrified plates (similar to a car battery), create a safe amount of hydrogen gas as a by-product, which can make your car run the same on half the amount of fuel. The thing that boggles me is that most people have never even heard of these. You can buy the plans off the internet (not as complicated as it sounds), or I can even get ready-to-install ones from my performance part supplier. I just find it strange that automotive technology and fuel sources have taken this long to start to veer even slightly away from oil (or, as ol’ Jed calls it, “Texas tea”).
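As a neutral back-of-the-envelope check on any electrolysis-on-board setup, Faraday’s law tells you how much hydrogen a given electrical current can actually liberate. The sketch below assumes a hypothetical 20-amp draw from the alternator; the figures are basic chemistry, not measurements from any particular converter.

```python
# Hydrogen yield from electrolysis, estimated via Faraday's law.
FARADAY = 96485.0      # coulombs per mole of electrons
H2_MOLAR_MASS = 2.016  # grams per mole of hydrogen gas

def hydrogen_grams_per_hour(current_amps: float) -> float:
    """Each H2 molecule takes 2 electrons, so moles of H2 = I*t / (2F)."""
    coulombs_per_hour = current_amps * 3600.0
    moles_h2 = coulombs_per_hour / (2.0 * FARADAY)
    return moles_h2 * H2_MOLAR_MASS

# A hypothetical 20 A draw produces under a gram of hydrogen per hour.
print(f"{hydrogen_grams_per_hour(20.0):.2f} g of H2 per hour")  # ~0.75
```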

At one point we bridged the gap from the horse and buggy to the steam engine, and then to internal combustion. With the technology we have now, we should have much higher mpg and horse-power, or an extremely viable alternative. It really makes me wonder where we might be now if this technology had been steered in a different direction from the start. It’s been over 100 years of improving the same technology using more or less the same fuel source. There are guys in the States who run their small pickups and VW buses on garage-refined deep fryer grease. There are guys who run pickups off wood-fire smoke. Just something to think about.

For the Silo, Robb Price.

Let’s Transform Canada’s AI Research Into Real World Adoption

October, 2025 – Canada has world-class strength in AI research but continues to fall short in widespread adoption, according to a new report from the C.D. Howe Institute. On the heels of the federal government’s announcement of a new AI Strategy Task Force, the report highlights the urgent need to bridge the gap between research excellence and real-world adoption.

In “AI Is Not Rocket Science: Ideas for Achieving Liftoff in Canadian AI Adoption,” Kevin Leyton-Brown, Cinda Heeren, Joanna McGrenere, Raymond Ng, Margo Seltzer, Leonid Sigal, and Michiel van de Panne note that while Canada ranks second globally in top-tier AI researchers and first in the G7 for per capita publications, it is only 20th in AI adoption among OECD countries. “This matters for the economy as a whole, because such knowledge translation is a key vehicle for productivity growth,” the authors say. “It is terrible news, then, that Canada experienced almost no productivity growth in the last decade, compared with a rate 15 times higher in the United States.”

The authors argue that new approaches to knowledge translation are needed because AI is not “rocket science”: instead of focusing on a single industry sector, the discipline develops general-purpose technology that can be applied to almost anything. This makes it harder for Canadian firms to find the right expertise and for academics to sustain ties with industry. Existing approaches – funding academic research, directly subsidizing industry efforts through measures such as SR&ED and superclusters, and promoting partnerships through programs like Mitacs and NSERC Alliance – have not solved the problem.

Four ideas to help firms leverage Canadian academic strength to fuel their AI adoption include: a concierge service to match companies with experts, consulting tied to graduate student scholarships, “research trios” that link AI specialists with domain experts and industry, and a major expansion of AI training from basic literacy to dedicated degrees and continuing education. Drawing on their experiences at the University of British Columbia, the authors show how local initiatives are already bridging gaps between academia and industry – and argue these models should be scaled nationally.

“Canada’s unusual strength in AI research is an enormous asset, but it’s not going to translate into real-world productivity gains unless we find better ways to connect AI researchers and industrial players,” says Kevin Leyton-Brown, professor of computer science at the University of British Columbia and report co-author. “The challenge is not that AI is too complicated – it’s that it touches everything. That means new models of partnership, new incentives, and new approaches to education.”

AI Is Not Rocket Science: 4 Ideas in Detail

Idea 1: A Concierge Service for Matchmaking

We have seen that it is hard for industry partners to know who to contact when they want to learn more about AI. Conversely, it is at least as hard for AI experts to develop a broad enough understanding of the industry landscape to identify applications that would most benefit from their expertise. Given the potential gains to be had from increasing AI adoption across Canadian industry, nobody should be satisfied with the status quo.

We argue that this issue is best addressed by a “concierge service” that industry could contact when seeking AI expertise. While matchmaking would still be challenging for the service itself, it could meet this challenge by employing staff who are trained in eliciting the AI needs of industry partners, who understand enough about AI research to navigate the jargon, and who proactively keep track of the specific expertise of AI researchers across a given jurisdiction. This is specialized work that not everyone could perform! However, many qualified candidates do exist (e.g., PhDs in the mathematical sciences or engineering). Such staff could be funded in a variety of different ways: for example, by an AI institute; a virtual national institute focused on a given application area; a university-level centre like UBC’s Centre for Artificial Intelligence Decision-making and Action (CAIDA); a nonprofit like Mitacs; a provincial ministry for jobs and economic growth; or the new federal ministry of Artificial Intelligence and Digital Innovation.

Having set up an organization that facilitates matchmaking, it could make sense for the same office to provide additional services that speed AI adoption, but that are not core strengths of academics. Some examples include project management, programming, AI-specific skills training and recruitment, and so on. Overall, such an organization could be funded by some combination of direct government support, direct cost recovery, and an overhead model that reinvests revenue from successful projects into new initiatives.

Idea 2: Consultancy in Exchange for Student Scholarships

Many businesses that would benefit from adopting AI do not need custom research projects and do not want to wait a year or more to solve their problems. The lowest-hanging fruit for Canadian AI adoption is ensuring that industry is well informed about potentially useful, off-the-shelf AI technologies. We thus propose a mechanism under which AI experts would provide limited, free consulting to local industry. AI experts would opt in to being on a list of available consultants. A few hours of advice would be free to each company, which would then have the option of co-paying for a limited amount of additional consulting, after which it would pay full freight if both parties wanted to continue. The company would own any intellectual property arising from these conversations, which would thus focus on ideas in the public domain. If the company wanted to access university-owned IP, it could shift to a different arrangement, such as a research contract. This system would work best given a concierge service like the one we just described. The value offered per consulting hour clearly depends on the quality of the academic–industry match, and some kind of vetting system would be needed to ensure the eligibility of industry participants.

Why would an AI expert sign up to give advice to industry? All but the best-funded Canadian faculty working in AI report that obtaining enough funding to support their graduate students is a major stressor. Attempting to establish connections with industry is hard work, and such efforts pay off only if the industry partner signs on the dotted line and matching funds are approved. There is thus space to appeal to faculty with a model in which they “earn” student scholarships for a fixed amount of consulting work. For example, faculty could be offered a one-semester scholarship for every eight hours set aside for meetings with industry, meaning that one weekly “industry office hour” would indefinitely fund two graduate students. Consulting opportunities could also be offered directly to postdoctoral fellows or senior (e.g., post-candidacy) PhD students in exchange for fellowships. In such cases, trainees should be required to pass an interview, certifying that they have both the technical and soft skills necessary to succeed in the consulting role. The concierge service could help decide which industry partners could be routed to PhD students and which need the scarcer consulting slots staffed by faculty members.

The system would offer many benefits. From the industry perspective, it would make it straightforward to get just an hour or two of advice. This might often be enough to allow the company to start taking action towards AI adoption: there is a rich ecosystem of high-performance, reliable, and open-source AI tools; often, the hard part is knowing what tool to use in what way. Beyond the value of the advice itself, consulting meetings offer a strong basis for building relationships between academics and industry representatives, in which the academic plays the role of a useful problem solver rather than of a cold-calling salesperson. These relationships could thus help to incubate Mitacs/Alliance-style projects when research problems of mutual interest emerge (though also see our idea below about how restructuring such projects could help further).

For academics, the system would constitute a new avenue for student funding that would reward each hour spent with a predictable amount of student support. Furthermore, it would offer scaffolded opportunities to deepen connections with industry. The system would come with no reporting requirements beyond logging the time spent on consulting. The faculty member would be free to use earned scholarships to support any student (regardless, for example, of the overlap between the student’s research and the topics of interest to companies), increasing flexibility over the Mitacs/Alliance system, in which specific students work with industry partners. Students who self-funded via consulting would learn valuable skills and would expand their professional networks, improving prospects for post-graduation employment.

Finally, the system would also offer multiple benefits from the government’s perspective. It would generate unusually high levels of industrial impact per dollar spent (consider the number of contact hours between academia and industry achieved per dollar under the funding models mentioned in Section 3). All money would furthermore go towards student training. The system would automatically allocate money where it is most useful, directing student funding to faculty who are both eager to take on students and relevant to industry, all without the overhead of a peer-review process. And it would generate detailed impact reports as a side effect of its operations, since each hour of industry–academia contact would need to be logged to count towards student funding.

Idea 3: Grants for Research Trios

Our third proposal is an approach for expanding the Mitacs/Alliance model to make it work better for AI. Industry–academia partnerships leverage two key kinds of expertise from the academic side: methodological know-how for solving problems and knowledge about the application domain used for formulating such problems in the first place. In fields for which the set of industry partners is relatively small and relatively stable, it makes sense to ask the same academics to develop both kinds of expertise. In very general-purpose domains like AI, it holds back progress to ask AI experts to become domain experts, too. Instead, it makes sense to seek domain knowledge from other academics who already have it.

We thus propose a mechanism that would fund “research trios” rather than bilateral research pairings. Each trio would contain an AI expert, an academic domain expert, and an industry partner. This approach capitalizes on the fact that there is a huge pool of academic talent outside core AI with deep disciplinary knowledge and a passion for applying AI. While such researchers are typically not in a position to deeply understand cutting-edge AI methodologies, they are ideally suited to serve as a bridge between researchers focused on AI methodologies and Canadian industrial players seeking to achieve real-world productivity gains. In our experience at UBC, the pool of non-AI domain experts with an interest in applying AI is considerably larger than the pool of AI experts.

One advantage of this model is that projects can be initiated by the larger population of domain experts, who are also more likely to have appropriate connections to industry. Beyond this, involving domain experts increases the likelihood that a project will succeed and gives industry partners more reason to trust the process while a solution is being developed. The model meets a growing need for funding researchers outside computer science for projects that involve AI, rather than concentrating AI funding within a group of specialists. At the same time, it avoids the pitfall of encouraging bandwagon-jumping “applied AI” projects that lack adequate grounding in modern AI practices. Finally, it not only transfers AI knowledge to industry, but also does the same for both the domain expert and their students.

Idea 4: Greatly Expanded AI Training

As AI permeates the economy, Canada will face an increasing need for AI expertise. Today, that training comes mostly in the form of computer science degrees. Just as computer science split off from mathematics in the 1960s, AI is emerging today as a discipline distinct from computer science. In part, this shift is taking the form of recognizing that not every AI graduate needs to learn topics that computer science rightly considers part of its core, such as software engineering, operating systems, computer architecture, user interface design, computer graphics, and so on. Conversely, the shift sees new topics as core to the discipline. Most fundamental is machine learning. Dedicated training in AI will require a deeper focus on the mathematical foundations of probability and statistics, building to advanced topics such as deep learning, reinforcement learning, machine learning theory, and so on. Various AI modalities also deserve separate study, such as computer vision, natural language processing, multiagent systems, robotics, and reasoning. Training in ethics, optional in most computer science programs, will become essential.

Beyond dedicated training in the core discipline, we anticipate huge demand for broad-audience AI literacy training; for AI minors to complement other disciplinary specializations; for continuing education and “micro-credential” programs; and for executive education in AI. There is also a growing need for “AI Adoption Facilitators”: bridge-builders who can help established workers in medium-to-large organizations understand how data-driven tools could offer value in solving the problems they face. Training for this role would emphasize business principles and domain expertise, but would also require firmer foundations in machine learning and data science than are currently typical in those disciplines.

Read the full report via our friends at C.D. Howe Institute here.

SETI Search For Space Aliens Increases Odds With Your Computer

Zuhra Abdurashidova

I graduated from the University of California at Berkeley about a decade ago with a degree in Mechanical Engineering. I received two job offers, one from SETI to work on high performance signal processing and the other from industry.

One does not simply walk away from SETI, so I had the pleasure of joining the Berkeley SETI Research Center (BSRC). I received a warm welcome and was promptly sent to West Virginia to help install a new SETI system at the Green Bank Telescope.

There was a steep learning curve, but I was fascinated by BSRC’s work and couldn’t wait to actually understand what was going on.

As it turns out, our group is looking to expand its computing power, providing the ability to look at more star systems with habitable planets, expand the involvement of volunteers and acquire larger volumes of data; in short, broaden the search and increase our chances of intercepting a signal. Now I’m working on setting up new servers, network hardware, and signal-processing systems at Green Bank. We’re hoping to get data flowing and recording soon, and make it available for the interested public.

From the 19th-century idea of drawing a giant Pythagorean triangle in the Siberian tundra to signal extraterrestrials, to our current collection of servers storing and analyzing data, it is not hard to see how much progress has already been made.

Running SETI software on your home computer looks like this.

Funding from the Breakthrough Initiatives is spawning new projects that would not have been otherwise possible. SETI@home is planning to work with Breakthrough Listen to collect and distribute data from the Green Bank and Parkes telescopes. However, in order to sustain the whole SETI@home effort we could still use support from our devoted SETI@home contributors.

Recently, I spent a day at the Bay Area Science Festival talking to kids and their adults. I was fascinated by just how stoked kids are about SETI. Some came with prepared questions and showed incredible curiosity and intelligence. The BSRC team is hoping to inspire kids to pursue science careers and I think searching for life beyond Earth is a great way to get them interested and involved. I hope you continue your support for this fascinating endeavor, and keep your eyes on the stars.  For the Berkeley SETI Research Center team, Zuhra Abdurashidova.

Supplemental- via nemesis maturity YouTube channel

Wow Signal – Scientists say that if the signal came from extraterrestrials, they are likely to be an extremely advanced civilization, as the signal would have required a 2.2-gigawatt transmitter, vastly more powerful than any on Earth.

The signal bore the expected hallmarks of non-terrestrial and non-Solar System origin.

One summer night in 1977, Jerry Ehman, a volunteer for SETI, or the Search for Extraterrestrial Intelligence, may have become the first man ever to receive an intentional message from an alien world. Ehman was scanning radio waves from deep space, hoping to randomly come across a signal that bore the hallmarks of one that might be sent by intelligent aliens, when he saw his measurements spike.

The signal lasted 72 seconds, the longest duration Ehman’s telescope, Ohio State’s Big Ear, could observe any fixed point on the sky as Earth’s rotation swept the antenna beam across it. It was loud and appeared to come from deep space, in the constellation Sagittarius near a star called Tau Sagittarii, 122 light-years away.
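
A rough geometric check (our sketch, not part of the original reporting) shows why 72 seconds is a natural duration: a stationary transit telescope sees any fixed point on the sky only for as long as Earth’s rotation takes to carry the antenna beam across it. Inverting that relationship in Python, using the signal’s approximately -27-degree declination, gives the beam width a 72-second transit implies:

    import math

    # Earth turns 360 degrees per sidereal day (~86164 s) relative to the stars.
    OMEGA_DEG_PER_S = 360.0 / 86164.0   # ~0.004178 deg/s

    def implied_beam_width(transit_s, dec_deg):
        """Right-ascension beam width (degrees) implied by a transit of the
        given duration for a source at the given declination."""
        return transit_s * OMEGA_DEG_PER_S * math.cos(math.radians(dec_deg))

    # The Wow! signal lasted 72 s near declination -27 degrees (approximate).
    print(f"{implied_beam_width(72.0, -27.0):.2f} degrees")   # ~0.27 degrees

Any point source drifting through a beam roughly a quarter of a degree wide would rise and fall over the same 72 seconds, which is why the duration alone cannot distinguish a deliberate beacon from any other point-like emitter.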

All attempts to locate the signal again have failed, leading to much controversy and mystery about its origins and its meaning.

http://en.wikipedia.org/wiki/Wow!_signal

http://en.wikipedia.org/wiki/Tau_Sagi…

http://www.bigear.org/wowmenu.htm

A Synth Holy Grail- Oberheim FVS-1

Considered by many to be The Holy Grail of Polyphonic Synthesis, this meticulously refurbished Oberheim FVS-1 took 88 hours of skilled vintage synth tech time via our friends at tonetweakers to perfect. The FVS-1 contains 4 classic Oberheim SEM modules, each providing a single dual oscillator voice. Sounds are dialed in manually on each module, with global control over the most tweaked parameters via the programmer module, where patches are also saved and recalled. Since each SEM is manually adjusted, it’s hard to get them sounding exactly the same. The result is a much more organic, slightly detuned, richer, truly magical sound than you’d get out of most other poly synths.

Famous users include Lyle Mays, 808 State, Depeche Mode, Styx, Pink Floyd, The Shamen, Gary Wright, Joe Zawinul and John Carpenter (yep the film director of The Thing, Big Trouble in Little China, Starman, Escape From New York and other classics often composed and recorded music for his movies). You won’t find a better example of this beautiful classic synthesizer, so if you’re looking for an exceptional 4 voice, now’s the time. Visit our friends at tonetweakers.com to learn more.

The OB Four Voice contains 4 SEMs and a mixer module. This beautiful instrument can play up to 8 oscillators at once, for insanely humongous sounds. 

One of the first

The 4 voice was one of the first polyphonic synths. Each of the four Synthesizer Expander Modules (SEMs) can be assigned to a different note. Splitting voices between modules is also possible, as is a monophonic unison mode (a sketch of this voice assignment follows below). A single voice is surprisingly powerful, offering 2 oscillators, 2 envelopes (1 for filter, 1 for volume), an LFO, pulse width modulation and a real sweet multimode filter with sweep-able mode (which few synths offered). The programmer module allows fast saving and recall of programmed sounds.

With a combined 8 oscillators, these sound unbelievably fat. Even a single SEM sounds great. In unison mode, play all VCOs on one key for one of the most powerful vintage synth sounds ever. Nothing sounds like it to us and we’ve played everything. This is a personal favorite. This FVS-1 has the standard configuration of modules:

  • 4 x Synthesizer Expander Module (SEM)
  • Keyboard Output module
  • Polyphonic Keyboard module
  • Programmer module
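
To make the voice-assignment idea concrete, here is a minimal Python sketch (our simplification of the general technique, not Oberheim’s actual keyboard circuitry) of the two modes described above: polyphonic allocation of incoming notes across four voices, and a unison mode that stacks every voice on a single key.

    class FourVoice:
        """Toy four-voice allocator: one slot per SEM-style voice."""

        def __init__(self, unison=False):
            self.unison = unison
            self.voices = [None] * 4   # note currently held by each voice
            self.steal = 0             # round-robin pointer for voice stealing

        def note_on(self, note):
            if self.unison:
                self.voices = [note] * 4          # all oscillators on one key
                return
            if None in self.voices:               # prefer a free voice
                self.voices[self.voices.index(None)] = note
            else:                                 # otherwise steal round-robin
                self.voices[self.steal] = note
                self.steal = (self.steal + 1) % 4

        def note_off(self, note):
            self.voices = [None if v == note else v for v in self.voices]

    synth = FourVoice()
    for n in (60, 64, 67, 71, 74):   # five keys pressed, only four voices
        synth.note_on(n)
    print(synth.voices)              # [74, 64, 67, 71]: the fifth note stole voice 0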

No clangs or zaps


If you are an analog synth head who makes musical sounds, you need one of these. To avoid disappointment though, we would recommend anyone looking for a dedicated sound effects machine to go for something else. This 4 voice is fabulous at musical tones and can make some interesting sound effects but there are better choices for clangs, zaps, explosions and similar atonal timbres. 

Other famous users include: Joe Zawinul, Chick Corea, Larry Fast (Synergy), Jan Hammer, Herbie Hancock, Human League, Michael McDonald / Doobie Brothers, Patrick Moraz, Steve Porcaro, The Shamen, Tim Simenon, Depeche Mode, Vince Clarke / Erasure, Tangerine Dream, Stevie Wonder and many other influential musicians who could afford one – this was a very expensive instrument when it came out! 

A Pathway To Trusted AI

Artificial Intelligence (AI) has been part of our lives for decades, but since the public launch of ChatGPT showcased generative AI in 2022, society has faced unprecedented technological evolution.

With digital technology already a constant part of our lives, AI has the potential to alter the way we live, work, and play – but exponentially faster than conventional computers have. With AI comes staggering possibilities for both advancement and threat.

The AI industry presents unique and dangerous opportunities and challenges. AI can do amazing things humans can’t, but in many situations (the so-called black box problem) experts cannot explain why it reached a particular decision or where its information came from. These outcomes can sometimes be inaccurate because of flawed data, bad decisions or the infamous AI hallucinations. There is little regulation or guidance in software and effectively no regulations or guidelines in AI.

How do researchers find a way to build and deploy valuable, trusted AI when there are so many concerns about the technology’s reliability, accuracy and security?

That was the subject of a recent C.D. Howe Institute conference. In my keynote address, I commented that it all comes down to software. Software is already deeply intertwined in our lives, from health, banking, and communications to transportation and entertainment. Along with its benefits, there is huge potential for disruption and tampering with societal structures: power grids, airports, hospital systems, private data, trusted sources of information, and more.

Consumers might not incur great consequences if a shopping application goes awry, but our transportation, financial or medical transactions demand rock-solid technology.

The good news is that experts have the knowledge and expertise to build reliable, secure, high-quality software, as demonstrated across Class A medical devices, airplanes, surgical robots, and more. The bad news is this is rarely standard practice. 

As a society, we have often tolerated compromised software for the sake of convenience. We trade privacy, security, and reliability for ease of use and corporate profitability. We have come to view software crashes, identity theft, cybersecurity breaches and the spread of misinformation as everyday occurrences. We are so used to these trade-offs with software that most users don’t even realize that reliable, secure solutions are possible.

With the expected potential of AI, creating trusted technology becomes ever more crucial. Allowing unverifiable AI in our frameworks is akin to building skyscrapers on silt. Security and functionality by design trump whack-a-mole retrofitting. Data must be accurate, protected, and used in the way it’s intended.

Striking a balance between security, quality, functionality, and profit is a complex dance. The BlackBerry phone, for example, set a standard for secure, trusted devices. Data was kept private, activities and information were secure, and operations were never hacked. Devices were used and trusted by prime ministers, CEOs and presidents worldwide. The security features it pioneered live on and are widely used in the devices that outcompeted BlackBerry.

Innovators have the know-how and expertise to create quality products. But often the drive for profits takes precedence over painstaking design. In the AI universe, however, where issues of data privacy, inaccuracies, generation of harmful content and exposure of vulnerabilities have far-reaching effects, trust is easily lost.

So, how do we build and maintain trust? Educating end-users and leaders is an excellent place to start. They need to be informed enough to demand better, and corporations need to strike a balance between caution and innovation.

Companies can build trust through strong adherence to safe software practices, education in AI evolution and adherence to evolving regulations. Governments and corporate leaders can keep abreast of how other organizations and countries are enacting policies that support technological evolution, and can institute accreditation and financial incentives that support best practices. Across the globe, countries and regions are already developing strategies and laws to encourage responsible use of AI.

Recent years have seen the creation of codes of conduct and regulatory initiatives such as:

  • Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, September 2023, signed by AI powerhouses such as the Vector Institute, Mila-Quebec Artificial Intelligence Institute and the Alberta Machine Intelligence Institute;
  • The Bletchley Declaration, Nov. 2023, an international agreement to cooperate on the development of safe AI, has been signed by 28 countries;
  • US President Biden’s 2023 executive order on the safe, secure and trustworthy development and use of AI; and
  • Governing AI for Humanity, UN Advisory Body Report, September 2024.

We have the expertise to build solid foundations for AI. It’s now up to leaders and corporations to ensure that much-needed practices, guidelines, policies and regulations are in place and followed. It is also up to end-users to demand quality and accountability. 

Now is the time to take steps to mitigate AI’s potential perils so we can build the trust that is needed to harness AI’s extraordinary potential. For the Silo, Charles Eagan. Charles Eagan is the former CTO of BlackBerry and a technical advisor to AIE Inc.

Canadian Space Agency – How Space Affects Our Body

Living in space has significant effects on the human body. As we prepare for journeys to more distant destinations like Mars, humankind must tackle these risks to ensure safe travel for our astronauts.


Canadian Space Agency PSA – space info

Auto Retail Finally Being Disrupted By AI

With AI reshaping everything from finance to fast food, the $1.5T auto retail industry is finally facing its overdue disruption. The typical car-buying experience, riddled with hidden fees, lead bloat, pricing games and low trust, has remained stubbornly analog. But now, with 90% of dealerships in America (and a growing percentage in Canada and Mexico) experimenting with AI tools and 1 in 4 buyers already using AI to shop, the tide is turning. Agentic AI technology is fundamentally reshaping one of the most significant purchases in a person’s life.

Zach Shefska, Co-Founder and CEO of CarEdge, asserts that agentic AI is the key to rebuilding trust, removing friction and leveling the playing field for both buyers and sellers. From AI-powered shopping assistants that negotiate on your behalf, to data tools that reveal deceptive dealership practices, Shefska is a pioneer in “agentic AI” — a new form of artificial intelligence bringing much-needed transparency to the industry.

  • The Broken Status Quo: Car buying is frustrating and inefficient for both consumers and dealerships—highlighting key stats like 72% sales staff turnover and 2% lead conversion from third-party platforms.
  • Lead Generation Platforms Are Failing: Legacy systems flood dealers with unqualified leads, drain resources, and deliver minimal value to consumers.
  • The Rise of Agentic AI in Auto Retail: Consumers are turning to tools like ChatGPT and CarEdge’s AI agent to navigate purchases with more confidence, speed, and clarity—25% are already doing it.
  • From Friction to Fluidity: Agentic AI replaces quantity with quality—streamlining the buyer’s journey, reducing information overload, and improving dealer efficiency.
  • The End of Pricing Games: AI tools now collect and publish out-the-door pricing from thousands of dealerships, exposing hidden fees and rewarding transparent sellers.
  • The Future of Negotiation: AI agents can negotiate on behalf of both buyers and sellers—minimizing stress, cutting transaction times from days to hours, and removing the adversarial edge.
  • Real-World Impact Stories:  One buyer saved $1,280 and hours of back-and-forth using CarEdge’s agentic AI—illustrating AI’s practical value in real-life scenarios.
  • AI Helps Honest Dealers Win: In a trust-starved industry, AI gives reputable dealers a new way to stand out by offering full transparency and faster deals.
  • What’s Next for AI in Auto Retail: The emerging frontier: AI agents dynamically collecting and updating real-time pricing and inventory data across markets to offer true market intelligence.

For the Silo, Zach Shefska. Zach is CEO of CarEdge, a leading platform, founded by father-and-son team Ray and Zach Shefska, dedicated to empowering car shoppers with free expert advice, in-depth market insights and tools to navigate every step of the car-buying journey. From researching vehicles to negotiating deals, CarEdge helps consumers save money, time and hassle. Along with trusted resources like the CarEdge Research Center, Vehicle Rankings and Reviews, and hundreds of guides on YouTube, CarEdge is redefining transparency and fairness in the automotive industry. Connect with Shefska at www.CarEdge.com or on social media via YouTube, TikTok, X, Facebook, and Instagram.

Buckminster Fuller’s Fascinating Unbuilt Buildings

Buckminster was a genius and his geodesic dome buildings were not only revolutionary in their construction but were also incredibly unique and memorable. Perhaps your grandparents attended Expo67 in Montreal (you guessed it, waaay back in 1967) and visited the United States Pavilion- read this snippet for a time capsule account:

“The United States exhibit, entitled Creative America, is designed to illustrate technological and esthetic inventiveness in the U.S.A. A huge transparent geodesic “bubble” contains a multi-level system of exhibit platforms interconnected by escalators and walkways. The platforms support a variety of exhibit components specially selected or designed for the new environment created by the structure. Situated on Ile Sainte-Hélène close to the Métro station from which there is Minirail connection with the Expo-Express, the bubble is 20 stories high and has a spherical diameter of 250 feet. By day, the bubble glistens as the sun highlights the structure and, by night, the bubble “glows” from interior lighting. The interior exhibits reflect different aspects of the United States and include folk art, cinema and fine arts displays, as well as a space exhibit which is reached by a 125 foot escalator and a simulated lunar landscape supporting full scale lunar vehicles. A 300-seat theatre features a 3-screen color film showing the games children play.”

Photo- National Archives of Canada

If you think that was pretty amazing check out some of Buckminster’s buildings that unfortunately didn’t make it past the planning stage.

Fascinating Unbuilt Buildings

New Moon Rover Readies For 2030 Launch

VENTURI SPACE PRESENTS MONA LUNA, 
THE EUROPEAN LUNAR ROVER
MONA LUNA, designed by Sacha Lakic

Paris Air Show, Le Bourget, June 2025 – Venturi Space unveils MONA LUNA, its 100% European-built lunar rover. Designed to support the ambitions of the European Space Agency and the French CNES, the vehicle will be built at Venturi Space France’s facility in Toulouse. The ultimate aim is to provide Europe with a lunar-capable rover by 2030.

European autonomy in lunar mobility is a major strategic challenge. Venturi Space is helping to make that a reality with MONA LUNA, its upcoming lunar rover designed to meet the needs of ESA and national European space agencies. The vehicle will further Europe’s efforts to achieve technological independence in the field of lunar mobility, enabling it to get ahead of the industrial curve and achieve its space ambitions.

A project led by Venturi Space France 
Venturi Space France will oversee MONA LUNA’s development and space qualification from its base in Toulouse, coordinating every aspect of the process: onboard electronics, avionics, space-to-ground links, energy management systems, assembly, final integration, and acceptance testing in readiness for space flight. All with one clear objective: to deploy MONA LUNA at the Moon’s South Pole by 2030.

Backed by the ESA and CNES
The European Space Agency is supporting Venturi Space’s efforts to design and develop the critical technologies required for a large lunar rover, capable of surviving multiple lunar nights. ESA’s support validates Venturi Space’s approach and highlights its expertise. The project will draw on the experience acquired from the programmes to develop the FLIP and FLEX rovers under a strategic partnership with US-based company Venturi Astrolab, Inc. Venturi Space is currently designing and building the hyper-deformable wheels that will be fitted to those vehicles, along with the associated electrical systems (in Switzerland) and high-performance batteries (in Monaco).

Using technology made in Europe
MONA LUNA is designed to be carried into space by the Ariane 6.4 launch system and landed on the Moon’s surface by the European Argonaut lunar lander, while the rover itself will be equipped with a robotic arm to handle scientific instruments and payloads. It will be:
– electrically powered, recharging via solar panels,
– designed to move autonomously,
– equipped with three high-performance batteries,
– capable of carrying a wide range of payloads,
– designed to survive multiple lunar nights,
– capable of a top speed of 20 km/h,
– designed to weigh a total of 750 kg.

The rover could also be used in an emergency to carry an astronaut in difficulty, as envisaged by the ESA and CNES in their feasibility studies.

A clear commercial purpose
MONA LUNA’s maiden mission will focus on purely scientific applications, but future deployments could be organized to meet demand from the European private sector for a variety of purposes, including carrying payloads to the South Pole, exploiting lunar resources (such as helium-3) in situ, or even public outreach campaigns. This approach will help establish a sustainable long-term economic model for the rover, in much the same way as the early development of terrestrial mobility.


Gildo Pastor, President of Venturi Space:
“I’m still an explorer, first and foremost. Space is a new frontier, and MONA LUNA is how we are actually going to broach it. Alongside Europe, we aim to build an autonomous lunar exploration capability to meet the scientific, economic, and strategic challenges of tomorrow.”

Dr. Antonio Delfino, Director of Space Affairs at Venturi Space:
“Our primary focus is to make ourselves fully available to the ESA and European national space agencies. With MONA LUNA, we aim to deliver major technological breakthroughs that will pave the way for extended lunar mobility.”

For The Silo, Jarrod Barker.

5 Tips for Regularly Driving Your Vintage Car

Summer, and thus driving season, is currently in full swing for much of Canada. Most of us who have them are trying to drive our classics every chance we get. Here are some vital reminders to heed if your vintage ride gets called up into everyday action.

Where I live is currently in the beautiful pocket of time where the mornings are cool yet bright and the sun only really gets hot in the middle of the afternoon. All of my cars love this weather, and I love driving just that little bit more. So I’m trying to drive as much as I can, and if you are doing the same, here are a handful of reminders for the times your vintage ride gets called up into more routine service.

Before we dive in though, it’s worth mentioning that old cars were once new cars. Someone drove and treated my Chevrolet Corvair the way I currently behave while behind the wheel of my wife’s Jeep Renegade—a daily driver. Traffic 30, 40, or even 90 years ago was radically different than traffic today, and many of our common-sense habits have shifted meaning to the point that what makes total sense for you in your old car will look insane to a common road user. While old cars require an additional amount of care and attention to be used regularly, driving your car is the best thing you can do for it. Don’t be scared of using the car exactly how it was intended.

Old cars have old brakes

Model A cast brakes
Fresh wheel bearings and drums made for a big improvement in drivability and safety on my Model A Ford. Kyle Smith

It’s easy to get lured into driving like those around you, but be careful. Without notice, you’ll find yourself tailgating at the same distance as the modern cars, and when that line of cars taps the brakes, suddenly the concept of 5-mph bumpers doesn’t seem so comical.

Vintage brakes can be made to work very well with a bit of care and attention, but even I have to admit vintage designs and materials just cannot compare to modern brakes—that is before even mentioning driver assist systems like anti-lock braking or emergency braking. Give yourself plenty of room.

Check your fluids often

Triumph spitfire hood up
Kyle Smith

Modern cars have spoiled us with the ability to drive thousands of miles without opening the hood. Regardless of how you feel about the separation between driver and mechanic over time, driving your vintage car on more than just a couple weekends a month requires staying on top of topping off fluids.

Old engines can and often do consume oil at a rate much higher than modern engines. Add in even just a small leak and suddenly the bottom of the dipstick is bone dry and before long, so is the oil pickup. Engine oil also helps cool an engine, so keeping oil topped up helps for multiple reasons beyond just proper lubrication. Also keep an eye on brake fluid and coolant.

Get used to the gauges

Modern car gauges are “normalized,” meaning they often sit basically stationary while driving despite slight fluctuations in the pressures, temperatures, and levels they monitor. On older cars, a coolant temp gauge might rise slightly when caught at a long stoplight, but that is not necessarily a cause for concern. Most automotive engines operate best with coolant temps between 180 and 210 degrees Fahrenheit (roughly 82 to 99 degrees Celsius). A modern gauge will sit still across that entire range, but an old-school mechanical gauge will transmit everything. This means coolant temp could drop slightly when you turn on the heater, or increase some with long periods of idling or while an air conditioner is cycling.
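
To make “normalized” concrete, here is a hypothetical sketch (ours; real instrument-cluster firmware is proprietary and surely differs) of a gauge that pins the needle to center anywhere inside the normal operating band and only moves outside it:

    NORMAL_BAND_F = (180.0, 210.0)   # typical coolant operating range, deg F

    def needle(temp_f, band=NORMAL_BAND_F):
        """Needle position in [0, 1]: 0 = cold peg, 0.5 = center, 1 = hot peg."""
        lo, hi = band
        if lo <= temp_f <= hi:
            return 0.5                                   # whole band reads "center"
        if temp_f < lo:                                  # below band: toward cold peg
            return max(0.0, 0.5 * (temp_f - 100.0) / (lo - 100.0))
        return min(1.0, 0.5 + 0.5 * (temp_f - hi) / (250.0 - hi))  # above band

    for t in (160, 185, 195, 209, 230):
        print(t, round(needle(t), 2))   # 185, 195 and 209 all read dead center

The mechanical gauge in your classic reports the left column directly; a modern dash shows you only the right.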

Make your escape plans

Traffic on the IH-10 Katy Freeway viewed facing west near Loop 610
Smiley N. Pool/Houston Chronicle/Getty Images

Even in great condition, aging cars can and do break down. Think through what common failures might occur with your car and formulate a plan for how you will handle the situation. This can mean packing a tool kit, re-upping your roadside assistance membership, or choosing routes and times of day that will help ensure you have a smooth trip. Some vintage cars will have zero trouble in modern traffic, but if yours tends to overheat or get cranky sitting still, make sure that you scout an escape route, should you get snarled in traffic. Being stuck on the side of the road is infinitely better than being stuck in the middle of the road. Trust me. There are a few roads around town that I avoid in my vintage cars due to the lack of shoulder or safe place to veer off. Paranoid? Maybe a little, but I don’t want to get hit while sitting on the side of the road.

Be aware of your tires

Classic Car Tires Ganz Alfa Romeo
Andrew Ganz

Modern tires are downright amazing and often go underappreciated. Since vintage cars get less mileage than their modern counterparts, a lot more people are willing to drive on older or poor-condition tires, sometimes out of pure ignorance or from lack of inspection. Tread depth and age are big considerations, but if you’re running modern reproductions of older tire designs, there is also the way those tires handle water. Siping and water control have a huge impact on handling and braking. You might have brand-new tires, but if the design is 50 years old, they are going to handle that way. Again, not a bad thing, but something to adjust to. For the Silo, Kyle Smith.

Why So Many Struggle With Brain Fog

Why are so many people still struggling with brain fog, chronic fatigue, low energy, impaired memory, diminished focus, high stress and ADHD symptoms, even after years of trying treatments? Neurotologist Dr. Kendal Stewart believes it’s because we’re too often treating symptoms, not causes. He’s spent the last 25 years addressing that with science-backed ways to help people at every age improve how they feel and function, both immediately and long-term. As an authority in everyday brain health, Dr. Kendal Stewart helps individuals optimize focus, memory, resilience and other brain health concerns by transforming complex neurological science into simple, actionable lifestyle-based strategies.


Dr. Stewart has spoken at length and has written many editorials discussing real-world habits, tactics and solutions to reduce brain fog, feel more energized, support focus, maintain emotional balance, and preserve cognitive health as one ages. Related topics include:

  • What brain fog, anxiety, and immune dysfunction have in common—and how to address all three holistically
  • Fueling your brain and immune system based on your unique DNA
  • Actionable daily habits to support brain and immune system health
  • How your genetics dictate your brain & immune health–and how to decode it
  • Why the future of medicine is personalized and already here
  • What are neuroimmune disorders, and why are we seeing a rise in conditions like chronic pain?
  • How genetic testing removes guesswork in treating complex neurological conditions
  • Hope for the undiagnosed: Dr. Stewart’s approach to finding the ‘source’ when other treatments fail
  • Why we’re still getting brain fog wrong—and what to do instead
  • A neurotologist’s take on impaired memory, focus, stress and fatigue: stop treating just the symptoms
  • Easy ways to support your brain & immune system every day
  • Does your DNA hold the key to focus, energy & emotional balance?
  • Genetics meets neuroscience for personalized brain health

What distinguishes Dr. Stewart?

  • Medical Maverick: One of the few specialists bridging neurotology (brain-ear balance) and neuroimmune genetics to treat complex disorders.
  • DNA-Driven Results: Nearly every patient receives genetic testing to eliminate guesswork—a game-changer for conditions like autism, chronic fatigue, and concussions.
  • Science Meets Storytelling: From IV therapies to nutrigenomics, he translates cutting-edge science into actionable steps for families and high performers.
  • Media-Ready: A charismatic speaker and podcast personality with patented tech, a supplement line (Neurobiologix), and a mission to “recover” patients, not just manage symptoms.


About the Expert
Dr. Stewart is a board-certified neurotologist and nationally recognized expert in neuroimmune disorders—including genetic abnormalities, chronic pain, ADD/ADHD and autism spectrum conditions. With advanced training in both surgery and cellular science, he’s made it his life’s work to uncover root causes and tailor individualized solutions through genetic testing, functional medicine, and integrative care. His approach emphasizes prevention as much as treatment, using lifestyle, nutrition, and nervous system support as daily fuel for better brain function. Through his work, he provides practical tools to regulate stress, stabilize energy and boost mental clarity.

He’s also a sought-after keynote speaker, inventor, and founder of multiple healthcare innovations, including GX Sciences, SensoryView, and Neurobiologix—a company dedicated to improving individual well-being by developing cutting-edge nutritional supplements rooted in the science of nutrigenomics. Dr. Stewart brings not only clinical authority but also an empowering, real-world lens—helping families, patients, and professionals better understand and improve nervous system and immune function. For the Silo, Karen Hayhurst.

Seriously, What The F**k Do We Know About Reality?

Just over two decades ago in a small theater in Yelm, Washington, a little film called What The Bleep Do We Know?!? screened to its first audiences and the term “I Create My Reality” was thrust into the collective consciousness. One of the themes explored is the idea that individuals have the power to create their own reality through their thoughts and intentions. This concept is illustrated through the experiences of Amanda, the film’s protagonist, and supported by discussions on the nature of consciousness and its potential influence on the physical world.

Since then, countless films and books have extolled the wonders of quantum physics and how understanding the nature of reality could change your life, often in just 3 easy steps. I too thought it was easy; heck, I made a movie about it! And for a while it was easy, until I realized that I had only scratched the surface of what “it” all means.

For sure, at a party I could rattle off the wonders of quantum this and quantum that, I could throw around words like entanglement and heady concepts like the Copenhagen interpretation, I could wow you with the double slit like nobody’s business. But the truth was, it was really all just smoke and mirrors.

What did understanding quantum physics have to do with my happiness?

What did understanding the workings of the brain mean to my life, in reality, at least this reality, the one where I have kids and bills to pay? I mean, it’s fun to dream about other dimensions and my life as an electron popping in and out, but in the end I felt as though it was becoming mental masturbation, an easy way to escape from the fact that even though I knew I wasn’t really touching that chair, it was possible I wasn’t even real.

Betsy was one of the three filmmakers (along with William Arntz and Mark Vicente) behind What the Bleep Do We Know!?

What I was truly seeking was not the facts about how that chair manifested itself into my reality, but how I could be happy whether I had that chair or not.

Happiness has nothing to do with quarks, and the discovery of the Higgs boson was not going to bring me everlasting peace and joy. That I was going to have to find all on my own.

I began to explore the sacred cows, not only in my life, my beliefs about who I was and what I wanted, but also the sacred cows of spirituality, new thought and, yes, quantum physics, and how I could take all this knowledge and use it to create the happiness I sought, because after all, that is what we are all after. It is why we ask “why?”. It is why we explore the deepest depths of the quantum foam. And so far quantum physics hasn’t found the happiness particle, because it doesn’t exist within the particles out there; it exists within the immeasurable particles within me.

For the Silo, Betsy Chasse.

Image courtesy of: science.howstuffworks.com

What the Bleep Do We Know!? Supplemental-

Roger Ebert review (In Memoriam)

https://www.rogerebert.com/reviews/what-the-do-we-know-2004

What Makes A Gas Mask A Great Gas Mask?

Sometimes great things can come from unexpected places. When our friends at kommandostore.com were hit up by an Italian scuba diving company for CBRN-Rated Gas Masks a few years back, they were very intrigued.
Mestel Safety, under ‘Ocean Reef Group’, makes the “SGE 400-3” — a gas mask that thinks completely outside the box — a favorite all-rounder on the gas mask market. 
SEE THINGS CLEARER.
As usual, kommandostore will be offering the full suite of masks (a CBRN-approved and a non-CBRN-approved version*), filters, and eyeglass inserts. *More on that below.
A look under the sea – how military scuba diving had an important impact on the design of this unorthodox gas mask… 
UNDER-WATER ORIGINS
Ocean Reef Group, Mestel Safety’s parent company, actually specializes in all kinds of equipment for undersea exploration. And it all started with rubber: Giorgio, Ruggero, and Gianni Gamberini worked at a tire repair shop in Genoa, Italy. During their experimentation with rubber compounds, they were approached by the pioneer of scuba diving and legend of the Italian Navy, Luigi Ferraro. He wanted to make rubber masks and fins for scuba diving based on his experience. From the successful designs that resulted, a sprawling Italian scuba industry was born.

Commander Luigi Ferraro pictured in his diving gear. He was part of the “Gamma” sapper group, who performed some of the first major underwater stealth operations in WWII with the aid of very-early SCBA equipment. He would go on to sink 3 enemy ships by himself during a long sabotage operation, becoming one of the few people to have received Italy’s highest Naval honor (the Gold Medal) and live to tell the tale. The gif shows examples of some of the equipment he really used, including a damaged Panerai dive watch, and the aforementioned scuba fins. Quite the backstory.
But as with all good materials science, one of the breakthroughs resulted from a mistake. An “incorrect” mix of rubber ended up also being the first buoyant rubber compound, incredibly important in the making of flippers. The Gamberini brothers would also pioneer some of the first rubber watch straps, which were a massive upgrade in comfort & security compared to leather straps that would degrade in the salty depths.
This is about as good as scuba gear got in the 50s and 60s. On this gentleman’s left hand, you can see his dive watch with a stainless steel wrist strap. While still incredibly popular today even amongst avid scuba divers, they weren’t ideal for military use due to their reflectivity.
 Their company Ocean Reef would go on to pioneer the design of the first ever full-face mask for snorkeling use. It featured an almost entirely transparent facepiece with an incredible field of view, which would “float” in front of the rubber that sealed to your face, reducing felt weight.  Sounds like these would be great features on a gas mask, eh? They had the same feeling too… 
 “Mestel Safety”, their medical & safety division, would use everything they learned with their pedigree in undersea engineering, and the very gas mask we’re presenting today would be born. From the depths of the Mediterranean to a position of respect in military & civil applications, Ocean Reef has come a long way, and they definitely earned their spot amongst the best. 
COMBAT CAPABILITY
Don’t be spooked by the unconventional design — these masks are tough as nails. Mestel Safety tested their masks by barraging the facepiece with, quote, “6.35mm steel spheres going over 300 mph”. For some reason the specificity makes it sound hilarious, but that’s practically like being shot directly in the face with a BB gun over and over and shrugging it off – not bad one bit. So, rest assured, this thing can probably handle some projectiles from common workshop incidents and Airsoft matches.
 Probably its most visually obvious feature is, once again, the insane Field of View. It preserves nearly 90% of your vision without significant “warping” and makes it pretty usable with firearms like many mil-contract masks on the market. But when you put on the average military mask, you’ll be stunned at how much you can’t see in comparison. 
Having a massive split in the mask reduces the ocular overlap for your eyes and does, in fact, impede your vision right away. It’s why masks like the Avon M50 feature a single unified eyepiece instead of the classic two-piece styled masks of the cold war.
 Lastly, these are comfortable to wear over very long durations thanks to the “floating” facepiece design.  It allows the rubber to seal perfectly to the shape of your face, and takes the “felt weight” off of your face and onto the harness, where it should be. 
 We could go on about the cool factor of this mask for a lot longer but if you want to take a closer look at the mask you should investigate the product pages 👇 
KNOW THE DIFFERENCE!
An important side note on “CBRN” capability: If you’re looking for the model with 90% of the capability at a reduced price, the silicone-rubber-based model is what you’re going to want to pick up. So what’s that other 10%? We’ll keep it simple: the butylated rubber, or just “butyl rubber”, adds the ‘R’ and ‘N’ protections to CBRN (Chemical, Biological, Radiological, Nuclear).* If you’re actually planning on dealing with those extra threats or the ‘blister agents’ that can also bypass a silicone seal, you’re going to need way, way more than just the mask to protect yourself anyway. Think a full HAZMAT suit with chemical tape, gloves, booties. And that’d only be for an hour or two of exposure to some of these more deadly agents. In addition to having the right equipment, the best plan is to simply GTFO.
The TL;DR is that this mask will cover you (literally) in most other incidents where a civilian might want full-face protection, from civil unrest to forest-fire evacuation, and of course common household projects. It’s simple: pick up the ‘BB’ model if you are interested in having the full ‘CBRN’ capability at the cost of slightly reduced comfort. And remember: a gas mask is only as good as the filter you’re breathing through, and we have a plethora of information about the excellent filters we’re also stocking from Mestel.
Another cool feature: there are 3 different positions where filters can be placed, to your heart’s desire.
 One other note: the lack of ‘NIOSH’ approval for these masks is a bit misleading. Since these are European-made masks, they fall under ‘CE’ standards, which work a bit differently than NIOSH approval. An explanation of these standards can be found on kommandostore’s product page. 
 Whether this is your first serious use gas mask with actual pedigree or you’re looking for an affordable alternative to the mil-contract priced (expensive) masks, we’re confident that the SGE 400/3 will be the baby bear’s porridge. Once again, take a look at the product pages — you’ll find everything from sizing info to a free copy of the user’s manual if you’d like to read up. 

For the Silo, Jarrod Barker.

A Quest To Build My First Synthesizer

I started out creating sound experiments while in high school, circa 1980, with circuit-bent hardware and a cheap Casio keyboard.

I then entered the working world and forgot all about making music. Fast forward 30+ years, and the itch to make experimental music overtook me again, but now technology had changed drastically. I no longer needed hardware. I discovered apps on my iPhone, and music platforms like SoundCloud and Bandcamp were all that I needed. I was immediately obsessed.

Within a couple of years, I had filled over seven free SoundCloud accounts, as well as two Bandcamp albums and an artist page, with experimental music, and I was having a great time doing it. But I started to grow tired of using the same software.

I yearned to use hardware/instruments again, but not being able to play an instrument is a definite hindrance 🙂 I searched for cheap keyboards on the net. I soon discovered the “Stylophone” and ordered one ‘sight unseen’. It was unique, inexpensive and fun, but quite limited in sound variety. I started mixing the Stylophone with app-produced sounds/music, as well as other “found sounds”. (I really appreciate the functionality of software-based mixing apps, which are almost essential to my creations these days.) I then stumbled upon a couple of user videos of the Hyve synthesizer, and knew I had to have it. It was clearly friendly to non-musicians (and looked so different, cool and fun).

Then came the disappointment …

You can’t buy one! (BUT I HAD TO HAVE ONE!!!) Turns out, the engineer/designer guru behind this awesome device, Skot Wiedmann, held workshops in the Chicago area where you could go and build your own, very inexpensively. (Hard to believe, but it’s been almost a decade since I made this trip!) I knew what I had to do. I looked at a map, saw that Chicago was about 8 hours away from me here in Ontario, Canada, and realized that I had to go build it. I started to plan the trip. I knew that a fellow SoundCloud musician and Facebook friend (Leslie Rollins) lived in Berrien Springs, Michigan, about 2 hours outside of Chicago.

This presented a twofold opportunity: I could hopefully meet Leslie face to face, and hopefully have a place to spend the night. I contacted Les and everything was A-OK! I purchased a ticket to build my Hyve, and started to plan my road trip. The workshop was going to run from noon to 3pm, on a Saturday in late September, in a cool space called Lost Arts in Chicago.

I had the whole week off from work, because I was overseeing a contractor doing extensive yard work at my house all week, and I was hoping to leave Friday so as to arrive at Leslie’s place in the late afternoon or early evening, spend the night, and leave for the workshop Saturday morning. Alas, plans rarely work as hoped.

The contractor wasn’t finished until Friday afternoon, and Les wasn’t getting home from a business trip until late Friday night.
New plan! Early to bed Friday. Early to rise Saturday (2:30 am), and depart for Leslie’s place in Michigan. It was an easy drive, and I got to Berrien Springs (a beautiful sleepy little university village) around 8:30 am. Met Leslie, and got to trade stories over a great breakfast in a local cafe. Then, I quickly admired Leslie’s impressive modular synth racks at his home studio “Convolution Atelier” and then left for “Lost Arts” in Chicago.

Lost Arts is located in a cool old industrial complex.

The workshop provided everyone with a surface-mount board with the touchpad on one side and the component layout on the back. A sheet listing components and placement was also handed out, along with tiny plastic tweezers. Everyone then had their component side “pasted” with a solder paste applied through a pierced template, in a process similar to silk screening. Everyone then started to receive their very tiny components from the parts list. Following the placement locations, the components (chips, capacitors, resistors, etc.) were set into their pasted areas with the tweezers (magnification and extra lighting were a must). Once all the components were placed, they were carefully “soldered” into place by simply holding a heat gun over each component until the solder on the board had adhered it. Once this was done, everyone had their 9v battery and line-out jacks hand-soldered into place by Skot, and then … the moment of truth: Skot tested each one for proper operation.

It was a fascinating process and great experience.

I met a lot of cool people at the workshop, both builders and staff/helpers! I can’t say enough what a fantastic experience this was, and what an awesome, diverse and versatile device the Hyve is. I doubted my sanity when planning this trip, but it turned out to be very rewarding!

Leslie and I then went back to Michigan, stopped at a local brewery in Berrien Springs (Cultivate) and sampled a few of their excellent brews, and then proceeded to Convolution Atelier to play with Leslie’s modular system. (I’m a newbie to all things modular, and I received a great crash course from Leslie on his very cool array!) Then it was out to dinner with Leslie and his wonderful wife Lisa, and finally back to their house where I stayed for the night, and finally hit the road towards home the next morning. It truly was a great adventure! For the Silo, Mike Fuchs.

E Equals MC Squared Might Be Wrong

Theory of Intrinsic Energy by Donald H. MacAdam

Abstract

Gravitational action at a distance is non-Newtonian and independent of mass, but is proportional to intrinsic energy, distance, and time. Electrical action at a distance is proportional to intrinsic energy, distance, and time.

The conventional assumption that all energy is kinetic and proportional to velocity and mass has resulted in an absence of mechanisms to explain important phenomena such as stellar rotation curves, mass increase with increase in velocity, constant photon velocity, and the levitation and suspension of superconducting disks.

In addition, there is no explanation for the existence of the fine structure constant, no explanation for the value of the proton-electron mass ratio, no method to derive the spectral series of atoms larger than hydrogen, and no definitive proof or disproof of cosmic inflation.

All of the above issues are resolved by the existence of intrinsic energy.

Table of contents

  • Part One “Gravitation and the fine structure constant” derives the fine structure constant, the proton-electron mass ratio, and the mechanisms of non-Newtonian gravitation, including the precession rate of Mercury’s perihelion and stellar rotation curves.
  • Part Two “Structure and chirality” describes the structure of particles and the chirality meshing interactions that mediate action at a distance between particles and gravitons (gravitation) and particles and quantons (electromagnetism) and describes the properties of photons (with the mechanism of diffraction and constant photon velocity).
  • Part Three “Nuclear magnetic resonance” is a general derivation of the gyromagnetic ratios and nuclear magnetic moments of isotopes.
  • Part Four “Particle acceleration” derives the mechanism for the increase in mass (and mass-energy) in particle acceleration.
  • Part Five “Atomic Spectra” reformulates the Rydberg equations for the spectral series of hydrogen, derives the spectral series of helium, lithium, beryllium, and boron, and explains the process to build a table of the spectral series for any elemental atom.
  • Part Six “Cosmology” disproves cosmic inflation.
  • Part Seven “Magnetic levitation and suspension” quantitatively explains the levitation of pyrolytic carbon, and the levitation, suspension and pinning of superconducting disks.

Part One

Gravitation and the fine structure constant

“That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has … any competent faculty of thinking can ever fall into it.”1

Intrinsic energy is independent of mass and velocity. Intrinsic energy is the inherent energy of particles such as the proton and electron. Neutrons are composite particles composed of protons, electrons, and binding energy. Atoms, composed of protons, neutrons, and electrons, are the substance of larger three-dimensional physical entities, from molecules to galaxies.

Gravitation, electromagnetism, and other action-at-a-distance phenomena are mediated by gravitons, quantons and neutrinos. Gravitons, quantons and neutrinos are quanta that have a discrete amount of intrinsic energy and are emitted by particles in one direction at a time and absorbed by particles from one direction at a time. Emission-absorption events can be chirality meshing interactions that produce accelerations or achiral interactions that do not produce accelerations. Chirality meshing absorption of gravitons produces attractive accelerations, chirality meshing absorption of quantons produces either attractive or repulsive accelerations, and achiral absorption of neutrinos does not produce accelerations. The word neutrino is burdened with non-physical associations; thus achiral quanta are henceforth called neutral flux.

A single chirality meshing interaction produces a deflection (a change in position), but a series of chirality meshing interactions produces acceleration (serial deflections). A single deflection in the direction of existing motion produces a small finite positive acceleration (and inertia) and a single deflection in the direction opposite existing motion produces a small finite negative acceleration (and inertia).

There are two fundamental differences between the mechanisms of Newtonian gravitation and discrete gravitation. The first is the Newtonian probability two particles will gravitationally interact is 100% but the discrete probability two particles will gravitationally interact is significantly less. The second difference is the treatment of force. In Newtonian physics a gravitational force between objects always exists, the force is infinitesimal and continuous, and the strength of the force is inversely proportional to the square of the separation distance. In discrete physics the existence of a gravitational force is dependent on the orientations of the particles of which objects are composed, the force is discrete and discontinuous, and the number of interactions is inversely proportional to the square of the separation distance. While there are considerable differences in mechanisms, in many phenomena the solutions of Newtonian and discrete gravitational equations are nearly identical.

There are similar fundamental differences between mechanisms of electromagnetic phenomena and in many cases the solutions of infinitesimal and discrete equations are nearly identical.
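
The claim that the number of interactions falls off as the inverse square of separation follows from geometry alone and is easy to check numerically. Below is a minimal Monte Carlo sketch (our illustration; the text prescribes no algorithm) that emits quanta in isotropically random directions and counts how many strike a target of fixed cross-section at two different distances:

    import math, random

    def hit_fraction(distance, target_radius, n=1_000_000):
        """Fraction of isotropically emitted quanta striking a disk of the
        given radius at the given distance from the emitter."""
        # A quantum hits if its direction falls inside the cone the target subtends.
        cos_cone = distance / math.hypot(distance, target_radius)
        hits = sum(1 for _ in range(n) if random.uniform(-1.0, 1.0) > cos_cone)
        return hits / n

    near, far = hit_fraction(10.0, 1.0), hit_fraction(20.0, 1.0)
    print(near / far)   # ~4: doubling the distance quarters the interaction count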

A particle emits gravitons and quantons at a rate proportional to particle intrinsic energy. A particle absorbs gravitons and quantons, subject to availability, at a maximum rate proportional to particle intrinsic energy. Each graviton or quanton emission event reduces the intrinsic energy of the particle and each graviton or quanton absorption event increases the intrinsic energy of the particle. Because graviton and quanton emission events continually occur but graviton and quanton absorption events are dependent on availability, these mechanisms collectively reduce the intrinsic energy of particles.

Only particles in nuclear reactions or undergoing radioactive disintegration emit neutral flux, but in the solar system all particles absorb all available neutral flux.

In the solar system, discrete gravitational interactions mediate orbital phenomena and, for objects in a stable orbit the intrinsic energy loss due to the emission-absorption of gravitons is balanced by the absorption of intrinsic energy in the form of solar neutral flux.

Within the solar system, particle absorption of solar neutral flux (passing through a unit area of a spherical shell centered on the sun) adds intrinsic energy at a rate proportional to the inverse square of orbital distance, and over a relatively short period of time, the graviton, quanton, and neutral flux emission-absorption processes achieve Stable Balance resulting in constant intrinsic energy for particles of the same type at the same orbital distance, with particle intrinsic energies higher the closer to the sun and lower the further from the sun.

The process of Stable Balance is bidirectional.

If a high energy body consisting of high energy particles is captured by the solar gravitational field and enters into solar orbit at the orbital distance of earth, the higher particle intrinsic energies will result in an excess of intrinsic energy emissions compared to intrinsic energy absorptions at that orbital distance, and the intrinsic energy of the body will be reduced to bring it into Stable Balance.

If, on the other hand, a low energy body consisting of low energy particles is captured by the solar gravitational field and enters into solar orbit at the orbital distance of earth, the lower particle intrinsic energies will result in an excess of intrinsic energy absorptions at that orbital distance compared to the intrinsic energy emissions, and the intrinsic energy of the body will be increased to bring it into Stable Balance.
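
These two cases can be illustrated with a toy rate equation of our own construction (the author gives no quantitative model; the constants are arbitrary): emission drains intrinsic energy in proportion to the energy itself, while absorption of solar neutral flux adds energy at a rate set by the inverse square of orbital distance, so a body approaches the same equilibrium whether it starts hot or cold:

    K_ABS, K_EMIT, D = 1.0, 0.1, 1.0   # arbitrary units; D is orbital distance

    def settle(energy, steps=200, dt=0.5):
        """Integrate dE/dt = K_ABS/D**2 - K_EMIT*E from a starting energy."""
        for _ in range(steps):
            energy += dt * (K_ABS / D**2 - K_EMIT * energy)
        return energy

    equilibrium = K_ABS / (K_EMIT * D**2)          # analytic fixed point: 10.0
    print(settle(25.0), settle(2.0), equilibrium)  # both ends approach ~10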

In an ideal two-body earth-sun system, a spherical and randomly symmetrical earth is in Stable Balance orbit about a spherical and randomly symmetrical sun. A randomly symmetrical body is composed of particles that collectively emit an equal intensity of gravitons (graviton flux) through a unit area on a spherical shell centered on the emitting body.

Unless otherwise stipulated, in this document references to the earth or sun assume they are part of an ideal two-body earth-sun system.

The gravitational intrinsic energy of earth is proportional to the gravitational intrinsic energy of the sun because total emissions of solar gravitons are proportional to the number of gravitons passing into or through earth as it continuously moves on a spherical shell centered on the sun (and also proportional to the volume of the spherical earth, to the cross-sectional area of the earth, to the diameter of the earth and to the radius of the earth).

Likewise, because the sun and the earth orbit about their mutual barycenter, the gravitational intrinsic energy of the sun is proportional to the gravitational intrinsic energy of the earth because total emissions of earthly gravitons are proportional to the number of gravitons passing into or through the sun as it continuously moves on a spherical shell centered on the earth (and also proportional to the volume of the spherical sun, to the cross-sectional area of the sun, to the diameter of the sun and to the radius of the sun).

We define the orbital distance of earth equal to 15E10 meters and note earth’s orbit in an ideal two-body system is circular. If additional planets are introduced, earth’s orbit will become elliptical and the diameter of earth’s former circular orbit will be equal to the major axis of the elliptical orbit.

We define the intrinsic photon velocity c equal to 3E8 m/s and equal in amplitude to the intrinsic constant Theta, which is non-denominated. We further define the elapsed time for a photon to travel 15E10 meters equal to 500 seconds.

The non-denominated intrinsic constant Psi, 1E-7, is equal in amplitude to the intrinsic magnetic constant denominated in units of Henry per meter.

Psi is also equal in amplitude to the 2014 CODATA vacuum magnetic permeability divided by 4π (after 2014, CODATA values for permittivity and permeability are defined and no longer reconciled to the speed of light); to half the electromagnetic force (units of Newton) between two straight ideal (constant diameter and homogeneous composition) parallel conductors with center-to-center distance of one meter and each carrying a current of one Ampere; and to the intrinsic voltage of a magnetically induced minimum amplitude current loop (3E8 electrons per second).

The intrinsic electric constant, the inverse of the product of the intrinsic magnetic constant and the square of the intrinsic photon velocity, is equal to the inverse of 9E9 and denominated in units of Farad per meter.
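
The numerical definitions above reduce to a few lines of arithmetic. The sketch below is a Python transcription (the variable names are ours); the final line compares Psi to the CODATA vacuum magnetic permeability divided by 4π.

    # Defined intrinsic constants (values from the text, names ours)
    theta = 3e8                # intrinsic photon velocity c, m/s
    psi = 1e-7                 # intrinsic magnetic constant, Henry per meter
    d_earth = 15e10            # orbital distance of earth, meters

    print(d_earth / theta)             # 500.0 seconds of photon travel time
    print(1 / (psi * theta**2))        # intrinsic electric constant, 1/9E9 Farad per meter
    print(1.25663706212e-6 / (4 * 3.141592653589793))  # CODATA permeability / 4pi, ~1e-7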

The Newtonian mass of earth, denominated in units of kilogram, is equal to 6E24, and equal in amplitude to the active gravitational mass of earth, denominated in units of Einstein (the unit of intrinsic energy).

The active gravitational mass is proportional to the number of gravitons emitted and the Newtonian mass is proportional to the number of gravitons absorbed. Every graviton absorbed contributes to the acceleration and inertia of the absorber, therefore the Newtonian mass is also the inertial mass.

We define the radius of earth, the square root of the ratio of the Newtonian inertial mass of earth divided by orbital distance (equivalently, the square root of the ratio of the active gravitational mass of earth divided by its orbital distance), equal to the square root of 4E13, or 6.325E6, about 0.993 times the NASA volumetric radius of 6.371E6. Our somewhat smaller earth has a slightly higher density and a local gravitational constant equal to 10 m/s² at any point on its perfectly spherical surface.

We define the Gravitational constant at the orbital distance of earth, the ratio of the local gravitational constant of earth divided by its orbital distance, equal to the inverse of 15E9.

The unit kilogram is equal to the mass of 6E26 protons at the orbital distance of earth, and the proton mass equal to the inverse of 6E26.

The proton intrinsic energy at the orbital distance of earth is equal to the ratio of the proton mass divided by the mass-energy factor delta (equal to 100), that is, the inverse of 6E28. Within the solar system, the proton intrinsic energy increases at orbital distances closer to the sun and decreases at orbital distances further from the sun. Changes in proton intrinsic energy are proportional to the inverse square of orbital distance.
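
A similar check of the terrestrial definitions, again as a sketch with our own variable names:

    import math

    M_earth = 6e24                           # Newtonian mass of earth, kg
    d_earth = 15e10                          # orbital distance of earth, m
    r_earth = math.sqrt(M_earth / d_earth)   # defined radius of earth
    G_orbit = 10.0 / d_earth                 # Gravitational constant at earth's orbit

    print(r_earth, r_earth / 6.371e6)        # 6.325e6 m, ~0.993 of the NASA volumetric radius
    print(G_orbit, 1 / 15e9)                 # both 6.667e-11
    print(G_orbit * M_earth / r_earth**2)    # local gravitational constant, 10 m/s^2
    print((1 / 6e26) / 100)                  # proton intrinsic energy, 1/6E28 ~ 1.667e-29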

The Newtonian mass of the sun, denominated in units of kilogram, is equal to 2E30, and equal in amplitude to the active gravitational mass of the sun, denominated in units of Einstein.

The active gravitational mass of earth divided by the active gravitational mass of the sun is equal to the intrinsic constant Beta-square and its square root is equal to the intrinsic constant Beta.

The charge intrinsic energy ei, denominated in units of intrinsic Volt, is proportional to the number of quantons emitted by an electron or proton. The charge intrinsic energy is equal to Beta divided by Theta-square, the inverse of the square root of 27E38.

Intrinsic voltage does not dissipate kinetic energy.

The electron intrinsic energy Ee, equal to the ratio of Beta-square divided by Theta-cube, the ratio of Psi-square divided by Theta-square, the product of the square of the charge intrinsic energy and Theta, and the ratio of the intrinsic electron magnetic flux quantum divided by the intrinsic Josephson constant, is denominated in units of Einstein.

The intrinsic electron magnetic flux quantum, equal to the square root of the electron intrinsic energy, is denominated in units of intrinsic Volt second.

The intrinsic Josephson constant, equal to the inverse of the square root of the electron intrinsic energy, the ratio of Theta divided by Psi and the ratio of the photon velocity divided by the intrinsic sustaining voltage of a minimum amplitude superconducting current, is denominated in units of Hertz per intrinsic Volt.

The discrete (dissipative kinetic) electron magnetic flux quantum, equal to the product of 2π and the intrinsic electron magnetic flux quantum, is denominated in units of discrete Volt second, and the discrete rotational Josephson constant, equal to the intrinsic Josephson constant divided by 2π and the inverse of the discrete electron magnetic flux quantum, is denominated in units of Hertz per discrete Volt. These constants are expressions of rotational frequencies.

We define the electron amplitude equal to 1. The proton amplitude is equal to the ratio of the proton intrinsic energy divided by the electron intrinsic energy.

We define the Coulomb, ec, equal to the product of the charge intrinsic energy and the square root of half the proton amplitude. The Coulomb denominates dissipative current.

We define the Faraday equal to 1E5, and the Avogadro constant equal to the Faraday divided by the Coulomb.

Lambda-bar, the quantum of particle intrinsic energy, equal to the intrinsic energy content of a graviton or quanton, is the ratio of the product of Psi and Beta divided by Theta-cube, the ratio of Psi-cube divided by the product of Beta and Theta-square, the product of the charge intrinsic energy and the intrinsic electron magnetic flux quantum, and the charge intrinsic energy divided by the intrinsic Josephson constant.
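
Because each constant is defined as a ratio of earlier ones, the equivalences claimed above can be verified mechanically. A sketch, assuming the proton intrinsic energy of 1/6E28 stated earlier:

    import math

    theta, psi = 3e8, 1e-7
    beta_sq = 6e24 / 2e30                  # Beta-square: earth/sun active gravitational mass
    beta = math.sqrt(beta_sq)
    ei = beta / theta**2                   # charge intrinsic energy, intrinsic Volt
    Ee = beta_sq / theta**3                # electron intrinsic energy, Einstein
    flux_q = math.sqrt(Ee)                 # intrinsic electron magnetic flux quantum
    KJ = 1 / math.sqrt(Ee)                 # intrinsic Josephson constant
    amp_p = (1 / 6e28) / Ee                # proton amplitude = 150
    ec = ei * math.sqrt(amp_p / 2)         # the Coulomb, ~1.6667e-19
    lam = psi * beta / theta**3            # Lambda-bar

    print(ei, 1 / math.sqrt(27e38))                  # ~1.9245e-20 both ways
    print(Ee, psi**2 / theta**2, ei**2 * theta)      # ~1.1111e-31 three ways
    print(KJ, theta / psi)                           # 3e15 both ways
    print(amp_p, ec, 1e5 / ec)                       # 150, ~1.6667e-19, Avogadro constant 6e23
    print(lam, psi**3 / (beta * theta**2), ei * flux_q, ei / KJ)   # ~6.415e-36 four ways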

CODATA physical constants that are defined as exact have an uncertainty of 10 to 12 decimal places; therefore the exactness of Newtonian infinitesimal calculations is of a similar order of magnitude. We assert that Lambda-bar and proportional physical constants are discretely exact (equivalent to Newtonian infinitesimal calculations) because discretely exact physical properties can be expressed exactly to greater accuracy than can be measured in the laboratory.

All intrinsic physical constants and intrinsic properties are discretely rational. The ratio of two positive integers is a discretely rational number.

  • The ratio of two discretely rational numbers is discretely rational.
  • The rational power or rational root of a discretely rational number is discretely rational.
  • The difference or sum of discretely rational numbers is discretely rational. This property is important in the derivation of atomic spectra where it serves the same purpose as a Fourier transform in infinitesimal mathematics.

The intrinsic electron gyromagnetic ratio, equal to the ratio of the cube of the charge intrinsic energy divided by Lambda-bar square, is denominated in units of Hertz per Tesla.

The intrinsic proton gyromagnetic ratio, equal to the ratio of the intrinsic electron gyromagnetic ratio divided by the square root of the cube of half the proton amplitude, and to eight times the photon velocity divided by nine, is denominated in units of Hertz per Tesla.
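
A numeric check of the two gyromagnetic definitions, recomputing the needed constants so the sketch stands alone:

    import math

    theta = 3e8
    ei = 1 / math.sqrt(27e38)            # charge intrinsic energy
    lam = ei / 3e15                      # Lambda-bar = ei / intrinsic Josephson constant
    gyro_e = ei**3 / lam**2              # intrinsic electron gyromagnetic ratio

    print(gyro_e)                        # ~1.732e11 (CODATA electron value ~1.761e11)
    print(gyro_e / math.sqrt((150 / 2)**3), 8 * theta / 9)   # ~2.667e8 both ways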

The intrinsic conductance quantum, equal to the product of the intrinsic Josephson constant and the discrete Coulomb, is denominated in units of intrinsic Siemens.

The kinetic conductance quantum, equal to the intrinsic conductance quantum divided by 2π, is denominated in units of kinetic Siemens.

The CODATA conductance quantum is equal to 7.748091E-5.

The intrinsic resistance quantum, equal to the inverse of the intrinsic conductance quantum, is denominated in units of Ohm.

The kinetic resistance quantum, equal to the inverse of the kinetic conductance quantum, is denominated in units of Ohm.

The CODATA resistance quantum is equal to 1.290640E4.

The intrinsic von Klitzing constant, equal to the ratio of the discrete Planck constant divided by the square of the charge intrinsic energy, is denominated in units of Ohm.

The kinetic von Klitzing constant, equal to the ratio of the discrete Planck constant divided by the square of the discrete Coulomb, is denominated in units of Ohm.

The CODATA von Klitzing constant is equal to 2.581280745E4.
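
These conductance and resistance quanta are again simple arithmetic; a sketch (the CODATA comparisons are the ones quoted in the text, and the discrete Planck constant used at the end is defined later in this document):

    import math

    KJ = 3e15                            # intrinsic Josephson constant
    ec = 1 / 6e18                        # the discrete Coulomb, ~1.6667e-19
    G_i = KJ * ec                        # intrinsic conductance quantum, 5e-4
    G_k = G_i / (2 * math.pi)            # kinetic conductance quantum

    print(G_i, 1 / G_i)                  # 5e-4 and intrinsic resistance quantum 2000
    print(G_k, 1 / G_k)                  # ~7.958e-5 (CODATA 7.748e-5), ~12566 (CODATA 12906)

    lam = (1 / math.sqrt(27e38)) / KJ    # Lambda-bar
    h_d = lam * math.sqrt(10800)         # discrete Planck constant (defined below)
    print(h_d / ec**2)                   # kinetic von Klitzing, ~2.4e4 (CODATA 2.581e4)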

In Newtonian physics the probability particles at a distance will interact is 100% but in discrete physics a certain granularity is needed for interactions to occur.

A particle G-axis is a single-ended hollow cylinder. The mechanism of the G-axis is analogous to a piston which moves up and down at a frequency proportional to particle intrinsic energy. At the end of the up-stroke a single graviton is emitted and during a down-stroke the absorption window is open until the end of the downstroke or the absorption of a single graviton.

The difference (the intrinsic granularity) between the inside diameter of the hollow cylindrical G-axis and the outside diameter of the graviton allows absorption of incoming gravitons at angles that can deviate from normal (straight down the center) by plus or minus 20 arcseconds.

There are three kinds of intrinsic granularity: the intrinsic granularity in phenomena mediated by the absorption of gravitons and quantons; the intrinsic granularity in phenomena mediated by the emission of gravitons and quantons; and the intrinsic granularity in certain electromagnetic phenomena.

  • The intrinsic granularity in phenomena mediated by the absorption of gravitons or quantons by particles in tangible objects (with kilogram mass greater than one microgram or 1E20 particles) is discretely infinite; therefore the average value of 20 arcseconds is discretely exact.
  • The intrinsic granularity in phenomena mediated by the emission of gravitons or quantons by particles is 20 arcseconds because gravitons and quantons emitted in the direction in which the emitting axis is pointing have an intrinsic granularity of not more than plus or minus 10 arcseconds.
  • The intrinsic granularity of certain electromagnetic phenomena, in particular the Faraday disk generator: the “Lorentz force” that causes the velocity of an electron to be at a right angle to the force also causes an additional directional change of 20 arcseconds in the azimuthal direction.

In the above diagram, the intrinsic granularity of graviton absorption is illustrated on the left.

Above center illustrates the aberration between the visible and the actual positions of the sun with respect to an observer on earth as the sun moves across the sky. Position A is the visible position of the sun, position B is the actual position of the sun, position B will be the visible position of the sun in 500 seconds, and position C will be the actual position of the sun in 500 seconds. The elapsed time between successive positions is proportional to the separation distance, but 20 arcseconds of aberration is independent of separation distance.

Above right illustrates the six directions within a Cartesian space and the six possible forms describing the six possible facing directions in which a vector can point. A vector pointing up the G-axis of particle A in the facing direction of particle B has one and only one of the six possible forms. The probability a gravitational interaction will occur, if the vector is facing in one of the other five facing directions, is zero. Therefore, a gravitational interaction involving a graviton emitted by a specific particle A and absorbed by a specific particle B is possible (not probable) in only one-sixth the total volume of Cartesian space.

We define the intrinsic steric factor equal to 6. The intrinsic steric factor is inversely proportional to the probability a specific gravitational intrinsic energy interaction can occur on a scale where the probability a Newtonian gravitational interaction will occur is 100%.

The intrinsic steric factor points outward from a specific particle located at the origin of a Cartesian space facing outward into the surrounding space. The intrinsic steric factor applies to action at a distance in phenomena mediated by gravitons and quantons.

To convert 20 arcseconds of intrinsic granularity into an inverse possibility, divide the 1,296,000 arcseconds in 360 degrees by the product of 20 arcseconds and the intrinsic steric factor.

A possibility is not the same as a probability. The possibility two particles can gravitationally interact (each with the other) is equal to 1 out of 10,800. The probability two particles will gravitationally interact (each with the other) is dependent on the geometry of the interaction.

Because Newtonian gravitational interactions are proportional to the quantum of kinetic energy, the discrete Planck constant, and discrete gravitational interactions are proportional to the quantum of intrinsic energy, Lambda-bar, the factor 10,800 is a conversion factor.

In a bidirectional gravitational interaction, the ratio of the square of the discrete Planck constant divided by the square of Lambda-bar is equal to 10,800.

In a one-directional gravitational interaction the ratio of the discrete Planck constant divided by Lambda-bar is equal to the square root of 10,800.

The discrete Planck constant is equal to Lambda-bar times the square root of 10,800 and denominated in units of Joule second.

The value of the discrete Planck constant, approximately 1.006 times larger than the 2018 CODATA value, is the correct value for the two-body earth-sun system and proportional to the intrinsic physical constants previously defined.
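
The conversion factor and the discrete Planck constant follow directly from the quantities defined above; a sketch:

    import math

    arcsec_circle = 360 * 60 * 60              # 1,296,000 arcseconds in 360 degrees
    factor = arcsec_circle / (20 * 6)          # granularity x intrinsic steric factor = 10,800
    lam = (1 / math.sqrt(27e38)) / 3e15        # Lambda-bar from the earlier definitions
    h_d = lam * math.sqrt(factor)              # discrete Planck constant, Joule second

    print(factor)                              # 10800.0
    print(h_d)                                 # ~6.667e-34
    print(h_d / 6.62607015e-34)                # ~1.006 times the 2018 CODATA Planck constant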

The CODATA fine structure constant alpha is equal to the ratio of the square of the CODATA electron charge divided by the product of two times the CODATA Planck constant, the CODATA vacuum permittivity and the CODATA speed of light (2018 CODATA values).

The intrinsic constant Beta is a transformation of the CODATA expression.

By substitution of the charge intrinsic energy for the CODATA electron charge, Lambda-bar for two times the CODATA Planck constant, the intrinsic electric constant for the CODATA vacuum permittivity and the intrinsic photon velocity for the CODATA speed of light, the dimensionless CODATA fine structure constant alpha is transformed into the dimensionless intrinsic constant Beta.

The existence of the fine structure constant and its ubiquitous appearance in seemingly unrelated equations is due to the assumption that phenomena are governed by kinetic energy, consequently measured values of phenomena governed or partly governed by intrinsic energy do not agree with the theoretical expectations.

A gravitational phenomenon governed by intrinsic energy is the solar system Kepler constant, equal to the cube of the planet’s orbital distance divided by the square of the orbital period of the planet; to the product of the active gravitational mass of the sun and the Gravitational constant at the orbital distance of earth divided by 4π-square; and to the product of the square of the planet’s orbital velocity and the orbital distance of the planet divided by 4π-square.
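
The three expressions can be checked against each other; in this sketch the earth’s orbital period is derived from the defined values rather than taken from an ephemeris:

    import math

    G_orbit, M_sun, d = 1 / 15e9, 2e30, 15e10
    v = math.sqrt(G_orbit * M_sun / d)         # circular orbital velocity, ~2.98e4 m/s
    T = 2 * math.pi * d / v                    # orbital period, ~3.16e7 s

    print(d**3 / T**2)                         # ~3.377e18
    print(G_orbit * M_sun / (4 * math.pi**2))  # same
    print(v**2 * d / (4 * math.pi**2))         # same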

The intrinsic constant Beta-square, previously shown to be the ratio of the active gravitational mass of earth divided by the active gravitational mass of the sun, is also proportional to the key orbital properties of the sun, earth, and moon.

An electromagnetic phenomenon governed by intrinsic energy is the proton-electron mass ratio, here termed the electron-proton deflection ratio, equal to the square root of the cube of the proton intrinsic energy divided by the cube of the electron intrinsic energy, and to the square root of the cube of the proton amplitude divided by the cube of the unit electron amplitude.

The CODATA proton-electron mass ratio is a measure of electron deflection (1836.15267344) in units of proton deflection (equal to 1). Because the directions of proton and electron deflections are opposite, the electron-proton deflection ratio is approximately equal to the CODATA proton-electron mass ratio plus one.
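
The claimed near-agreement is easy to confirm:

    import math
    print(math.sqrt(150**3))          # electron-proton deflection ratio, ~1837.12
    print(1836.15267344 + 1)          # CODATA proton-electron mass ratio plus one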

In this document, unless otherwise specified (as in CODATA constants denominated in units of Joule proportional to the CODATA Planck constant), units of Joule are proportional to the discrete Planck constant.

The ratio of the discrete Planck constant divided by Lambda-bar, equal to the product of the mass-energy factor delta and omega-2, is denominated in units of discrete Joule per Einstein.

In the above equation the denomination discrete Joule represents energy proportional to the discrete Planck constant and the denomination Einstein represents energy proportional to Lambda-bar. The mass-energy factor delta converts non-collisional energy (action at a distance) into collisional energy in units of intrinsic Joule. The factor omega-2 converts units of intrinsic Joule into units of discrete Joule.
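
The identity in the preceding equation is numerical as well: delta times omega-2 reproduces the square root of 10,800.

    import math
    delta, omega_2 = 100, math.sqrt(1.08)
    print(delta * omega_2, math.sqrt(10800))   # both ~103.923, discrete Planck / Lambda-bar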

Omega factors correspond to the geometry of graviton-mediated and quanton-mediated phenomena.

We will begin with a brief discussion of electrical (quanton-mediated) phenomena then exclusively focus on gravitational phenomena for the remainder of Part One.

Electrical phenomena

The discrete steric factor, equal to 8, is the number of octants defined by the orthogonal planes of a Cartesian space.

Each octant is one of eight signed triplets (---, -+-, -++, --+, +++, +-+, +--, ++-) which correspond to the directions of the x, y, and z Cartesian axes.

A large number of random molecules, each with a velocity coincident with its center of mass, are within a Cartesian space. If the origin is the center of mass of specific molecule1, then random molecule2 is within one of the eight signed octants and, because the same number of random molecules are within each octant, specific molecule1 is likewise within one of the eight signed octants with respect to random molecule2. The possibility (not probability) of a center of mass collisional interaction between molecule2 and molecule1 is therefore equal to the inverse of the discrete steric factor (one in eight).

The discrete and intrinsic steric factors correspond to the geometries of phenomena governed by discrete kinetic energy (proportional to the discrete Planck constant) and to phenomena governed by intrinsic energy:

  • The discrete steric factor points inward from a random molecule in the direction of a specific molecule and applies to phenomena mediated by collisional interactions.
  • The intrinsic steric factor points outward from a specific particle into the surrounding space and applies to phenomena mediated by gravitons and quantons (action at a distance).

The intrinsic molar gas constant, equal to the discrete steric factor, is denominated in units of intrinsic Joule per mole Kelvin.

The discrete molar gas constant, equal to the product of the intrinsic molar gas constant and omega-2, is denominated in units of discrete Joule per mole Kelvin. The discrete molar gas constant agrees with the CODATA value within 1 part in 13,000.

The ratio of the CODATA electron charge (the elementary charge in units of Coulomb) divided by the charge intrinsic energy (in units of intrinsic Volt) is nearly equal to the discrete molar gas constant.

The intrinsic Boltzmann constant, equal to the ratio of the intrinsic molar gas constant divided by the Avogadro constant, is denominated in units of Einstein per Kelvin.

The discrete Boltzmann constant, equal to the product of omega-2 and the intrinsic Boltzmann constant, and to the ratio of the discrete molar gas constant divided by the Avogadro constant, is denominated in units of discrete Joule per Kelvin. The CODATA Boltzmann constant is equal to 1.380649E-23.
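
The gas-constant and Boltzmann relations are again checkable arithmetic (CODATA values as quoted in the text):

    import math

    omega_2 = math.sqrt(1.08)
    R_d = 8 * omega_2                          # discrete molar gas constant
    ei = 1 / math.sqrt(27e38)                  # charge intrinsic energy

    print(R_d, 8.314462618)                    # ~8.3138 vs CODATA, ~1 part in 13,000
    print(1.602176634e-19 / ei)                # CODATA elementary charge / ei, ~8.325
    print(R_d / 6e23, 1.380649e-23)            # discrete Boltzmann ~1.3856e-23 vs CODATA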

Gravitational phenomena

Omega-2, the square root of 1.08, corresponds to one-directional gravitational interactions between non-orbiting objects (objects not by themselves in orbit, that is, the object might be part of an orbiting body but the object itself is not the orbiting body), for example graviton emission by the large lead balls or absorption by the small lead balls in the Cavendish experiment.

Omega-4, 1.08, corresponds to two-directional gravitational interactions (emission and absorption) between non-orbiting objects, for example the acceleration of the large lead balls or the acceleration of the small lead balls in the Cavendish experiment.

Omega-6, the square root of the cube of 1.08, corresponds to gravitational interactions between a planet and moon in a Keplerian orbit where the square root of the cube of the orbital distance divided by the orbital period is equal to a constant.

Omega-8, the square of 1.08, corresponds to four-directional gravitational interactions by non-orbiting objects, for example the acceleration of the small lead balls and the acceleration of the large lead balls in the Cavendish experiment.

Omega-12, equal to the cube of 1.08, corresponds to gravitational interactions between two objects in orbit about each other, for example the sun and a planet in orbit about their mutual barycenter.
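
The omega factors form a single family of powers of 1.08 (omega-n = 1.08 to the power n/4); a one-line enumeration:

    for n in (2, 4, 6, 8, 12):
        print("omega-%d" % n, 1.08**(n / 4))   # 1.0392, 1.08, 1.1224, 1.1664, 1.2597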

Except where previously defined (the Gravitational constant at the orbital distance of earth, the orbital distance of earth, the mass and volumetric radius of earth, the mass of the sun) the following equations use the NASA values2 for the Newtonian masses, orbital distances, and volumetric radii of the planets.

The local gravitational constant for any of the planets is equal to the product of the Gravitational constant of earth and the Newtonian mass (kilogram mass) of the planet divided by the square of the volumetric radius of the planet.

The v²d value of a planetary moon is equal to the product of the Gravitational constant at the orbital distance of earth and the Newtonian mass of the planet.

The active gravitational mass of a planet, denominated in units of Einstein, is equal to the product of the square of the volumetric radius of the planet and the orbital distance of the planet, divided by the square of the orbital distance of the planet in units of the orbital distance of earth.

The mass of a planet in a Newtonian orbit about the sun (the planet and sun orbit about their mutual barycenter) is a kinetic property. The active gravitational mass of such a planet, denominated in units of Joule, is equal to the product of the active gravitational mass of the planet in units of Einstein and omega-12.

The Gravitational constant at the orbital distance of the planet is equal to the product of the local gravitational constant of the planet and the square of the volumetric radius of the planet, divided by the active gravitational mass of the planet.

The v²d value of a planetary moon is equal to the product of the Gravitational constant at the orbital distance of the planet and the active gravitational mass of the planet.

The v²d value calculated using the NASA orbital parameters for the moon is larger than the above calculated value by a factor of 1.00374; the v²d values calculated using the NASA orbital parameters for the major Jovian moons (Io, Europa, Ganymede, and Callisto) are larger than the above calculated values by factors of 1.0020, 1.0016, 1.00131, and 1.00133.
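
As an example of the pipeline, the earth-moon case; the lunar speed and distance here are round NASA-style values we supply, not values from the text:

    v2d_model = (1 / 15e9) * 6e24          # Gravitational constant at earth's orbit x active mass
    v_moon, d_moon = 1.022e3, 3.844e8      # mean lunar orbital speed (m/s) and distance (m)
    v2d_nasa = v_moon**2 * d_moon

    print(v2d_model)                       # 4.0e14
    print(v2d_nasa, v2d_nasa / v2d_model)  # ~4.015e14, ratio ~1.0037 (text: 1.00374)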

Newtonian gravitational calculations are extremely accurate for most gravitational phenomena, but there are a number of anomalies for which the Newtonian calculations are inaccurate. The first of these anomalies to come to the attention of scientists, in 1859, was the precession rate of the perihelion of mercury, for which the observed rate was about 43 arcseconds per century larger than the Newtonian calculated rate.3

According to Gerald Clemence, one of the twentieth century’s leading authorities on the subject of planetary orbital calculations, the most accurate method for calculating planetary orbits, the method of Gauss, was derived for calculating planetary orbits within the solar system with distance expressed in astronomical units, orbital period in days and mass in solar masses.4

The Gaussian method was used by Eric Doolittle in what Clemence believed to be the most reliable theoretical calculation of the perihelion precession rate of mercury.5

With modifications by Clemence including newer values for planetary masses, newer measurements of the precession of the equinoxes and a careful analysis of the error terms, the calculated rate was determined to be 531.534 arc-seconds per century compared to the observed rate of 574.095 arc-seconds per century, leaving an unaccounted deficit of 42.561 arcseconds per century.

The below calculations are based on the method of Price and Rush.6 This method determines a Newtonian rate of precession due to the gravitational influences on mercury by the sun and the five outer planets external to the orbit of mercury (venus, earth, mars, jupiter and saturn). The solar and planetary masses are treated as Newtonian objects, and in calculations of planetary gravitational influences the outer planets are treated as circular mass rings.

The Newtonian gravitational force on mercury due to the mass of the sun is equal to the ratio of the product of the negative Gravitational constant at the orbital distance of earth, the mass of the sun, and the mass of mercury divided by the square of the orbital distance of mercury.

The Newtonian gravitational force on mercury due to the mass of the five outer planets is equal to the sum of the gravitational force contributions of the five outer planets external to the orbit of mercury. The gravitational force contribution of each planet is equal to the ratio of the product of the Gravitational constant at the orbital distance of earth, the mass of the planet, the mass of mercury, and the orbital distance of mercury, divided by the product of twice the planet’s orbital distance and the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

The gravitational force ratio is equal to the gravitational force on mercury due to the mass of the five outer planets external to the orbit of mercury divided by the gravitational force on mercury due to the mass of the sun.

The gamma factor is equal to the sum of the gamma contributions of the five outer planets external to the orbit of mercury. The gamma contribution of each planet is equal to the ratio of the product of the mass of the planet, the orbital distance of mercury, and the sum of the square of the planet’s orbital distance and the square of the orbital distance of mercury, divided by the product of 2π, the planet’s orbital distance and the square of the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

Psi-mercury is equal to the product of π and the sum of one plus the difference between the negative of the gravitational force ratio and the ratio of the product of the Gravitational constant at the orbital distance of earth, π, the mass of mercury and the gamma factor divided by twice the gravitational force on mercury due to the mass of the sun.

The number of arc-seconds in one revolution is equal to 360 degrees times sixty minutes times sixty seconds.

The number of days in a Julian century is equal to 100 times the length of a Julian year in days.

The perihelion precession rate of mercury is equal to the ratio of the product of the difference between 2ψ-mercury and 2π, the number of arc-seconds in one revolution and the number of days in a Julian century, divided by the product of 2π and the NASA sidereal orbital period of mercury in units of day (87.969).
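
The chain of equations above reduces to a short program. The sketch below transcribes each step; G and the solar mass are the values defined in this document, while the mass of mercury and the planetary masses and distances are round NASA-style values we supply, so the output lands within a few arc-seconds of the rate quoted below rather than reproducing it exactly.

    import math

    G = 1 / 15e9                    # Gravitational constant at the orbital distance of earth
    M_sun = 2e30
    m = 3.301e23                    # mass of mercury, kg (supplied value)
    r = 5.79e10                     # orbital distance of mercury, m (supplied value)
    planets = {                     # (Newtonian mass kg, orbital distance m), supplied values
        "venus":   (4.867e24, 1.082e11),
        "earth":   (6e24,     15e10),
        "mars":    (6.417e23, 2.279e11),
        "jupiter": (1.898e27, 7.786e11),
        "saturn":  (5.683e26, 1.434e12),
    }

    F_sun = -G * M_sun * m / r**2   # force on mercury due to the sun (negative constant)
    F_planets = sum(G * M * m * r / (2 * R * (R**2 - r**2))
                    for M, R in planets.values())
    force_ratio = F_planets / F_sun
    gamma = sum(M * r * (R**2 + r**2) / (2 * math.pi * R * (R**2 - r**2)**2)
                for M, R in planets.values())
    psi_mercury = math.pi * (1 + (-force_ratio
                                  - G * math.pi * m * gamma / (2 * F_sun)))

    arcsec_per_rev = 360 * 60 * 60
    days_per_century = 100 * 365.25
    rate = ((2 * psi_mercury - 2 * math.pi) * arcsec_per_rev * days_per_century
            / (2 * math.pi * 87.969))
    print(rate)                     # ~529 arc-seconds per century with these supplied values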

The Newtonian perihelion precession rate of mercury determined above is 0.139 arc-seconds per century less than the Clemence calculated rate of 531.534 arc-seconds per century.

The following equations, the same format as the Newtonian equations, derive the non-Newtonian values (when different).

The Newtonian gravitational force on mercury due to the mass of the sun is equal to the ratio of the product of the negative Gravitational constant at the orbital distance of earth, the mass of the sun, and the mass of mercury divided by the square of the orbital distance of mercury.

The non-Newtonian gravitational force on mercury due to the mass of the five outer planets is equal to the sum of the gravitational force contributions of the five outer planets external to the orbit of mercury. The gravitational force contribution of each planet is equal to the ratio of the product of the Gravitational constant at the orbital distance of earth, the active gravitational mass (in units of Joule) of the planet, the Newtonian mass of mercury, and the orbital distance of mercury, divided by the product of twice the planet’s orbital distance and the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

The non-Newtonian gravitational force ratio is equal to the gravitational force on mercury due to the mass of the five outer planets external to the orbit of mercury divided by the gravitational force on mercury due to the mass of the sun.

The gamma factor is equal to the sum of the gamma contributions of the five outer planets external to the orbit of mercury. The gamma contribution of each planet is equal to the ratio of the product of the mass of the planet, the orbital distance of mercury, and the sum of the square of the planet’s orbital distance and the square of the orbital distance of mercury, divided by the product of 2π, the planet’s orbital distance and the square of the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.

The non-Newtonian value for Psi-mercury is equal to the product of π and the sum of one plus the difference between the negative of the gravitational force ratio and the ratio of the product of the Gravitational constant at the orbital distance of earth, π, the mass of mercury and the gamma factor divided by twice the gravitational force on mercury due to the mass of the sun.

The non-Newtonian perihelion precession rate of mercury is equal to the ratio of the product of the difference between 2ψ-mercury and 2π, the number of arc-seconds in one revolution and the number of days in a Julian century, divided by the product of 2π and the NASA sidereal orbital period of mercury in units of day (87.969).

The non-Newtonian perihelion precession rate of mercury is 6.128 arc-seconds per century greater than the Clemence observed rate of 574.095 arc-seconds per century.

We have built a model of gravitation proportional to the dimensions of the earth-sun system. A different model, with different values for the physical constants, would be equally valid if it were proportional to the dimensions of a different planet in our solar system or a planet in some other star system in our galaxy.

Our sun and the stars in our galaxy, in addition to graviton flux, emit large quantities of neutral flux that establish Stable Balance orbits for planets that emit relatively small quantities of neutral flux.

Our galactic center emits huge quantities of gravitons and neutral flux, and its dimensional relationship with our sun is dependent on the neutral flux emissions of our sun. If the intrinsic energy of our sun was less, its orbit would be further out from the galactic center, and if it was greater, its orbit would be closer in.

  • Of two stars at the same distance from the galactic center with different velocities, the star with higher velocity has a higher graviton absorption rate (higher stellar internal energy) and the star with lower velocity has a lower graviton absorption rate (lower stellar internal energy).
  • Of two stars with the same velocity at different distances from the galactic center, the star closer in will have a higher graviton absorption rate (higher stellar internal energy) and the star further out will have a lower graviton absorption rate (lower stellar internal energy).

The active gravitational mass of the Galactic Center is equal to the active gravitational mass of the sun divided by Beta-fourth, and to the cube of the active gravitational mass of the sun divided by the square of the active gravitational mass of earth.

The second expression of the above equation, generalized and reformatted, asserts the square root of the cube of the active gravitational mass of any star in the Milky Way divided by the active gravitational mass of any planet in orbit about the star is equal to a constant.
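
Numerically, the two expressions agree, and the generalized constant follows:

    import math
    M_sun, M_earth = 2e30, 6e24
    beta_4 = (M_earth / M_sun)**2                    # Beta-fourth, 9e-12
    print(M_sun / beta_4, M_sun**3 / M_earth**2)     # both ~2.222e41
    print(math.sqrt(M_sun**3) / M_earth)             # the asserted constant, ~4.714e20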

The above equation, combined with the detailed explanation of the chirality meshing interactions that mediate gravitational action at a distance, the derivation of solar system non-Newtonian orbital parameters, the derivation of the non-Newtonian rate of precession of the perihelion of mercury, and the detailed explanation of non-Newtonian stellar rotation curves, disproves the theory of dark matter.

Part Two

Structure and chirality

A particle has the property of chirality because its axes are orthogonal and directed, pointing in three perpendicular directions and, like the fingers of a human hand, the directed axes are either left-handed (LH) or right-handed (RH). The electron and antiproton exhibit LH structural chirality and the proton and positron exhibit RH structural chirality. The two chiralities are mirror images.

The electron G-axis (black, index finger) points into the paper, the electron Q-axis (blue, thumb) points up in the plane of the paper, and the north pole of the electron P-axis (red, middle finger) points right in the plane of the paper.

The orientation of the axes of an RH proton are the mirror image: the proton G-axis (black, index finger) points into the paper, the proton Q-axis (blue, thumb) points up in the plane of the paper, and the north pole of the proton P-axis (red, middle finger) points left in the plane of the paper.

Above, to visualize orientations, models are easier to manipulate than human hands.

When Michael Faraday invented the disk generator in 1831, he discovered the conversion of rotational force, in the presence of a magnetic field, into electric current. The apparatus creates a magnetic field perpendicular to a hand-cranked rotating conductive disk and, providing the circuit is completed through a path external to the disk, produces an electric current flowing inward from axle to rim (electron flow not conventional current), photograph below.7

Above left, the electron Q-axis points in the CCW direction of motion. The inertial force within a rotating conductive disk aligns conduction electron G-axes to point in the direction of the rim. The alignment of the Q-axes and G-axes causes the orthogonal P-axes to point down.

Above right, the electron Q-axis points in the CW direction of motion. The inertial force within a rotating conductive disk aligns conduction electron G-axes to point in the direction of the rim. The alignment of the Q-axes and G-axes causes the orthogonal P-axes to point up.

In generally accepted physics (GAP), the transverse alignment of electron velocity with respect to magnetic field direction is attributed to the Lorentz force but, as explained above, it is a consequence of electron chirality.

In addition to the transverse alignment of the electron direction with respect to the direction of the magnetic field, the electron experiences an additional directional change of 20 arcseconds in the azimuthal direction which causes the electron to spiral in the direction of the axle. Thus, in both a CCW rotating conductive disk and a CW rotating conductive disk, the current (electron flow not conventional current) flows from the axle to the rim.

The geometries of the Faraday disk generator apply to the orientation of conduction electrons in the windings of solenoids and transformers. CCW and CW windings advance in the same direction, below into the plane of the paper. In contrast to the rotating conductor in the disk generator, the windings are stationary, and the conduction electrons spiral through in the direction of the positive voltage supply (which continually reverses in transformers and AC solenoids).

Above left, the electron Q-axes point down in the direction of current flow through the CCW winding. The inertial force on conduction electrons moving through the CCW winding aligns the direction of the electron G-axes to the left. The electron P-axes, perpendicular to both the Q-axes and G-axes, point S→N out of the paper.

Above right, the electron Q-axes point up in the direction of current flow through the CW winding. The inertial force on conduction electrons moving through the CW winding aligns the direction of the electron G-axes to the left. The electron P-axes, perpendicular to both the Q-axes and G-axes, point S→N into the paper.

Above is a turnbuckle composed of a metal frame tapped at each end. On the left end an LH bolt passes through an LH thread and on the right end an RH bolt passes through an RH thread. If the LH bolt is turned CCW (facing right into the turnbuckle frame) the bolt moves to the right and the frame moves to the left and if the LH bolt is turned CW the bolt moves to the left and the frame moves to the right. If the RH bolt is turned CW (facing left into the turnbuckle frame) the bolt moves to the left and the frame moves to the right and if the RH bolt is turned CCW the bolt moves to the right and the frame moves to the left.

In the language of this analogy, a graviton or quanton emitted by the emitting particle is a moving spinning bolt, and the absorbing particle is a turnbuckle frame with a G-axis, Q-axis or P-axis passing through.

In a chirality meshing interaction, absorption of a graviton or quanton by the LH or RH G-axis, Q-axis or P-axis of a particle, causes an attractive or repulsive acceleration proportional to the difference between the graviton or quanton velocity and the velocity of the absorbing particle.

An electron G-axis has a RH inside thread and a proton G-axis has a LH inside thread. An electron G-axis emits CW gravitons and a proton G-axis emits CCW gravitons.

In the bolt-turnbuckle analogy, a graviton is a moving spinning bolt, and the absorbing particle through which the G-axis passes is a turnbuckle frame:

  • If a CCW graviton emitted by a proton is absorbed into a proton LH G-axis, the absorbing proton is attracted, accelerated in the direction of the emitting proton.
  • If a CW graviton emitted by an electron is absorbed into an electron RH G-axis, the absorbing electron is attracted, accelerated in the direction of the emitting electron.

Protons and electrons do not gravitationally interact with each other because a proton is larger than an electron; a graviton emitted by a proton is larger than a graviton emitted by an electron; the inside thread of a proton G-axis is larger than the inside thread of an electron G-axis; and these size differences prevent a graviton emitted by an electron from meshing with a proton G-axis, or a graviton emitted by a proton from meshing with an electron G-axis.

Tangible objects are composed of atoms which are composed of protons, electrons and neutrons.

In gravitational interactions between tangible objects (with kilogram mass greater than one microgram or 1E20 particles) the total intensity of the interaction is the sum of the contributions of the electrons and protons of which the object is composed (note that neutrons themselves do not gravitationally interact but each neutron is composed of one electron and one proton both of which do gravitationally interact).

A particle Q-axis is a single-ended hollow cylinder. The mechanism of the Q-axis is analogous to a piston which moves up and down at a frequency proportional to charge intrinsic energy. At the end of each up-stroke a single quanton is emitted. The absorption window opens at the beginning of the up-stroke and remains open until the beginning of the downstroke or the absorption of a single quanton.

The difference (the intrinsic granularity) between the inside diameter of the hollow cylindrical Q-axis and the outside diameter of the quanton allows absorption of incoming quantons at angles that can deviate from normal (straight down the center) by plus or minus 20 arcseconds.

An electron Q-axis has a RH inside thread and a proton Q-axis has a LH inside thread. An electron Q-axis emits CCW quantons and a proton Q-axis emits CW quantons.

In the bolt-turnbuckle analogy, a quanton is a moving spinning bolt, and the absorbing particle through which the Q-axis passes is a turnbuckle frame:

  • If a CW p-quanton emitted by a proton is absorbed into an electron RH Q-axis, the absorbing electron is attracted, accelerated in the direction of the emitting proton.
  • If a CW p-quanton emitted by a proton (or the anode plate in a CRT) is absorbed into a proton LH Q-axis, the absorbing proton is repulsed, accelerated in the direction of the cathode plate (opposite the direction of the emitting proton).
  • If a CCW e-quanton emitted by an electron is absorbed into an electron RH Q-axis, the absorbing electron is repulsed, accelerated in the direction opposite the emitting electron.
  • If a CCW e-quanton emitted by an electron (or the cathode plate in a CRT) is absorbed into a proton LH Q-axis, the absorbing proton is attracted, accelerated in the direction of the cathode plate (the direction of the emitting electron).

In a CRT, the Q-axis of an accelerated electron is oriented in the linear direction of travel and its P-axis and G-axis are oriented transverse to the linear direction of travel. After the electron is linearly accelerated, the electron passes between oppositely charged parallel plates that emit quantons perpendicular to the linear direction of travel, and these quantons are absorbed into the electron P-axes. The chirality meshing interactions between an electron with a linear direction of travel and the quantons emitted by either plate result in a transverse acceleration in the direction of the anode plate:

  • An incoming CW p-quanton approaching an electron RH P-axis within less than 20 arcseconds deviation from normal (straight down the center) is absorbed in an attractive chirality meshing interaction in which the electron is deflected in the direction of the anode plate.
  • An incoming CCW e-quanton approaching an electron RH P-axis within less than 20 arcseconds deviation from normal (straight down the center) is absorbed in a repulsive chirality meshing interaction in which the electron is deflected in the direction of the anode plate.

This is the mechanism of the experimental determination of the electron-proton deflection ratio.

The magnitude of the ratio between these masses is not equal to the ratio of the measured gravitational deflections but rather to the inverse of the ratio of the measured electric deflections. It would not matter which of these measurable quantities were used in the experimental determination if Newton’s laws of motion applied. However, in order for Newton’s laws to apply, the assumptions behind Newton’s laws, specifically the 100% probability that particles gravitationally and electrically interact, must also apply. But this is not the case for action at a distance.

The electron orientation below top left, rotated 90 degrees CCW, is identical to the electron orientations previously illustrated for a CW disk generator or a CW-wound transformer or solenoid; and the electron orientation bottom left is a 180 degree rotation of top left.

Above are reversals in Q-axis orientation due to reversals in the direction of incoming quantons.

Above top right and bottom right are the left-side electron orientations with the electron Q-axis directed into the plane of the paper (confirmation of the perspective transformation is easier to visualize with a model). These are the orientations of conduction electrons in an AC current.

In the top row, CW quantons emitted by the positive voltage source are absorbed in chirality meshing interactions by the electron RH Q-axis, attracting the absorbing electron. In the bottom row, CCW quantons emitted by the negative voltage source are absorbed in chirality meshing interactions into the electron RH Q-axis, repelling the absorbing electron.

In either case the direction of current is into the paper.

In an AC current, a reversal in the direction of current is also a reversal in the rotational chirality of the quantons mediating the current.

  • In a current moving in the direction of a positive voltage source each linear chirality meshing absorption of a CW p-quanton into an electron RH Q-axis results in an attractive deflection.
  • In a current moving in the direction of a negative voltage source each linear chirality meshing absorption of a CCW e-quanton into an electron RH Q-axis results in a repulsive deflection.

In an AC current, each reversal in the direction of current, reverses the direction of the Q-axes of the conduction electrons. This reversal in direction is due to a complex rotation (two simultaneous 180 degree rotations) that results in photon emission.

During the period of time over which the direction of current reverses (a period set by the inverse of the AC frequency), a shorter or longer inductive pulse of electromagnetic energy flows into the electron Q and P axes, and the quantons of which the electromagnetic energy is composed are absorbed in rotational chirality meshing interactions.

Above left, the electron P and Q axes mesh together at their mutual orthogonal origin in a mechanism analogous to a right angle bevel gear linkage.8

Above center and right, an incoming CCW quanton induces an inward CCW rotation in the Q-axis and causes a CW outward (CCW inward) rotation of the P-axis. The rotation of the Q-axis reverses the orientation of the P-axis and G-axis, and the rotation of the P-axis reverses the orientation of the Q-axis and the orientation of the G-axis thereby restoring its orientation to the initial direction pointing left and perpendicular to a tangent to the cylindrical wire.

Above center and right, an incoming CW quanton induces an inward CW rotation in the Q-axis and causes a CCW outward (CW inward) rotation of the P-axis. The rotation of the Q-axis reverses the orientation of the P-axis and G-axis, and the rotation of the P-axis reverses the orientation of the Q-axis and the orientation of the G-axis thereby restoring its orientation to the initial direction pointing left and perpendicular to a tangent to the cylindrical wire.

In either case the electron orientations are identical, but CCW electron rotations cause the emission of CCW photons and CW electron rotations cause the emission of CW photons.

The absorption of CCW e-quantons by the Q-axis rotates the Q-axis CCW by the square root of 648,000 arcseconds (180 degrees) and the P-Q axis linkage simultaneously rotates the P-axis CW by the square root of 648,000 arcseconds (180 degrees).

If the orientation of the electron G-axis is into the paper in a plane defined by the direction of the Q-axis, the CCW rotation of the Q-axis tilts the plane of the G-axis down by the square root of 648,000 arcseconds and the CW rotation of the P-axis tilts the plane of the G-axis to the right by the square root of 648,000 arcseconds.

The net rotation of the electron G-axis is equal to the product of the square root of 648,000 arcseconds and the square root of 648,000 arcseconds.

In the production of photons by an AC current, the photon wavelength and frequency are proportional to the current reversal time, and the photon energy is proportional to the voltage.

Above, an axial projection of the helical path of a photon traces the circumference of a circle and the sine and cosine are transverse orthogonal projections.9 The crest to crest distance of the transverse orthogonal projections, or the distance between alternate crossings of the horizontal axis, is the photon wavelength.

The helical path of photons explains diffraction by a single slit, by a double slit, by an opaque circular disk, or a sphere (Arago spot).

In a beam of photons with velocity perpendicular to a flat screen or sensor, each individual photon makes a separate impact that can be sensed or is visible somewhere on the circumference of one of many separate and non-overlapping circles corresponding to all of the photons in the beam. The divergence of the beam increases the spacing between circles and the diameter of each individual photon circle, which is proportional to the wavelength of that photon. The sensed or visible photon impacts form a region of constant intensity.

Below, the top image shows those photons, initially part of a photon beam illuminating a single slit, which passed through the single slit.10

Above, the bottom image shows those photons, initially part of a photon beam illuminating a double slit, that passed through a double slit.

Below, the image illustrating classical rays of light passing through a double slit is equally illustrative of a photon beam illuminating a double slit but, instead of constructive and destructive interference, the photons passing through the top slit diverge to the right and photons passing through the bottom slit diverge to the left. The spaces between divergent circles are dark and, due to coherence, the photon circles are brightest at the distance of maximum overlap, resulting in the characteristic double slit brighter-darker diffraction pattern.11

The mechanism of diffraction by an opaque circular disk or a sphere (Arago spot) is the same. In either case the opaque circular disk or sphere is illuminated by a photon beam of diameter larger than the diameter of the disk or sphere.

The photons passing close to the edge of the disk or sphere diverge inwards, and the spiraling helical path of an inwardly diverging CW photon passing one side of the disk will intersect, in a head-on collision, the spiraling helical path of an inwardly diverging CCW photon passing on the directly opposite side of the disk or sphere (if the opposite-chirality photons are equidistant from the center of the disk or sphere).

In the case of a sphere illuminated by a laser, the surface of the sphere must be smooth and the ratio of the square of the diameter of the sphere divided by the product of the distance from the center of the sphere to the screen and the laser wavelength must be greater than one (similar to the Fresnel number).

Photon velocity

Constant photon velocity is due to a resonance driven by the emission of photon intrinsic energy which results in an increase in wavelength and a proportional decrease in frequency. In a related phenomenon, Arthur Holly Compton demonstrated Compton scattering in which the loss of photon kinetic energy does not change velocity but increases wavelength and proportionally decreases frequency.12

The mechanism of constant photon velocity is the emission of quantons and gravitons.

Below top, looking down into the plane of the paper, a photon G-axis points in the direction of photon velocity and the P and Q-axes are orthogonal. In the language of the turnbuckle analogy, the mechanisms of the photon P and Q-axes are analogous to pistons which move up and down or back and forth and emit a single quanton or graviton at the end of each stroke.

Above middle, in column A of the P-axis row, at the position of the oscillation the up-stroke has just completed, a single graviton has been emitted, and the current direction of the oscillation is now down. In column B of the P-axis row, the position of the oscillation is mid-way, and the direction of the oscillation is down. In column C of the P-axis row, at the position of the oscillation the downstroke has just completed, a single graviton has been emitted, and the current direction of the oscillation is up. In column D of the P-axis row, the position of the oscillation is mid-way, and the direction of the oscillation is up.

Above middle, in column A of the Q-axis row, the position of the oscillation is mid-way and the direction of oscillation is left. In column B of the Q-axis row, at the position of the oscillation the left-stroke has just completed, a single quanton has been emitted, and the current direction of the oscillation is right. In column C of the Q-axis row, the position of the oscillation is mid-way and the direction of the oscillation is right. In column D of the Q-axis row, at the position of the oscillation the right-stroke has just completed, a single quanton has been emitted, and the current direction of the oscillation is left.

Above left or right bottom, in each cycle of the photon frequency there are eight sequential CCW or CW alternating quanton/graviton emissions and the intrinsic energy of the photon is reduced by Lambda-bar on each emission.

This is the mechanism of intrinsic redshift.

Part Three

Nuclear magnetic resonance

In the 1922 Stern-Gerlach experiment, a molecular beam of identical silver atoms passed through an inhomogeneous magnetic field. Contrary to classical expectations, the beam of atoms did not diverge into a cone with intensity highest at the center and lowest at the outside. Instead, atoms near the center of the beam were deflected with half the silver atoms deposited on a glass slide in an upper zone and half deposited in a lower zone, illustrating “space quantization.”

The Stern-Gerlach experiment, designed to test directional quantization in a magnetic field as predicted by old quantum theory (the Bohr-Sommerfeld hypothesis)13, was conducted two years before intrinsic spin was conceived by Wolfgang Pauli and six years before Paul Dirac formalized the concept. Intrinsic spin became part of the foundation of new quantum theory.

The concept of intrinsic spin, in which the property that causes the deflection of silver atoms in two opposite directions (“space quantization”) is inherent in the particle itself, is incorrect.

However, a molecular beam composed of atoms with magnetic moments passed through a Stern-Gerlach apparatus does exhibit the numerical property attributed to intrinsic spin. This property, here termed interactional spin, is not inherent in the atom but is dependent on external factors.

The protons within a nucleus are the origin of spin, magnetic moment, Larmor frequency, and other nuclear gyromagnetic properties. A nucleus contains “ordinary protons” which, for clarity, will be termed Pprotons, and “protons within neutrons” will be termed Nprotons.

In nuclei with an even number of Pprotons, the Pproton magnetic flux is contained within the nucleus and does not contribute to the nuclear magnetic moment.

With neutrons the situation is quite different. A neutron is achiral: it is a composite particle composed of an Nproton-electron pair and binding energy; it has no G-axis, therefore it does not gravitationally interact, and no Q-axis, therefore it is electrically neutral.

Within a nucleus, a neutron does not have a magnetic moment (during its less than 15-minute mean lifetime after a neutron is emitted from its nucleus, a free neutron has a measurable magnetic moment, but there are no free neutrons within nuclei) but the Nproton and electron of which a neutron is composed do have magnetic moments.

The gyromagnetic properties of a nucleus, its magnetic moment, its spin, its Larmor frequency, and its gyromagnetic ratio are due to Pprotons and Nprotons.

A molecular beam (composed of nuclei, atoms and/or molecules) emerging from an oven into a vacuum will have a thermal distribution of velocities. Molecules within the beam are subject to collisions with faster or slower molecules that cause rotations and vibrations, and the orientations of unpaired Pprotons and unpaired Nprotons are constantly subject to change.

In a silver atom there is a single unpaired Pproton, and the orientation of its P-axis with respect to its direction of motion through an inhomogeneous magnetic field will be either leading or trailing. Out of a large number of unpaired Pprotons, the P-axes will be leading 50% of the time and trailing 50% of the time. A silver atom containing an unpaired Pproton with a leading P-axis can be deflected in the direction of the inhomogeneous magnetic north pole, while a silver atom containing an unpaired Pproton with a trailing P-axis can be deflected in the direction of the south pole.

If the magnetic field is strong enough for a sufficient percentage of unpaired Pprotons (the orientation of which is constantly changing) to encounter lines of magnetic flux within 20 arcseconds and be deflected up or down, the molecular beam of silver atoms deposited on a glass slide at the center of the magnetic field (where it is strongest) will be split into two zones. Consistent with the definition of spin as the number of zones minus one, divided by 2 (S = (z-1)/2), a Stern-Gerlach experiment determines a spin equal to ½. This result is the only example of spin clearly determined by the position of atoms deposited on a glass slide.14
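
As a worked instance of the zone-counting rule, two zones on the slide give:

\[
S = \frac{z-1}{2}, \qquad z = 2 \;\Rightarrow\; S = \frac{2-1}{2} = \frac{1}{2}
\]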

The above explanation is correct for silver atoms passed through the inhomogeneous magnetic fields of the Stern-Gerlach apparatus, but in the 1939 Rabi experimental apparatus15 (upon which modern molecular beam apparatuses are modeled) the mechanism of deflection due to leading or trailing P-axes has nothing to do with the results achieved.

The 1939 Rabi experimental apparatus included back-to-back Stern-Gerlach inhomogeneous magnetic fields with opposite magnetic field orientations, but the result that dramatically changed physics, the accurate measurement of the Larmor frequency of nuclei, was done in a separate Rabi analyzer placed between the inhomogeneous magnetic fields. To Rabi, the importance of the Stern-Gerlach inhomogeneous magnets was for use in the alignment and tuning of the entire apparatus.

In a Rabi analyzer there is a strong constant magnetic field and a weaker transverse oscillating magnetic field. The purpose of the strong constant field is to decouple (increase the separation distance between) electrons and protons. The purpose of the transverse oscillating field is to stimulate the emission of photons by the decoupled protons.

When the Rabi apparatus is initially assembled, before installation of the Rabi analyzer the Stern-Gerlach apparatus is set up and tuned such that the intensity of the molecular beam leaving the apparatus is equal to its intensity upon entering.

After the unpowered Rabi analyzer is mounted between the Stern-Gerlach magnets, and the molecular beam exiting the first inhomogeneous magnetic field passes through the Rabi analyzer and enters the second inhomogeneous magnetic field, the intensity of the molecular beam leaving the apparatus decreases. In this state the entire Rabi apparatus is tuned and adjusted until the intensity of the entering molecular beam is equal to the intensity of the exiting beam.

When the crossed magnetic fields of the Rabi analyzer are switched on, for a second time the intensity of the exiting beam decreases. Then, by adjustment of the relative positions and orientations of the three magnetic fields (and also adjustment of the detector position to optimally align with decoupled protons in the nucleus of interest) the intensity of the exiting beam is returned to its initial value.

During an operational run, the transverse oscillating field stimulates the emission of photons at the same frequency as that of the transverse oscillating magnetic field. At resonance the photon frequency is the Larmor frequency of the nucleus, and the Larmor frequency divided by the strength of the strong magnetic field is equal to the gyromagnetic ratio. The Larmor frequency has a very sharp resonant peak limited only by the accuracy of the two experimental measurables: the intensity of the strong magnetic field and the frequency of the oscillating weak magnetic field.
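
A minimal numerical sketch of the relation just stated, with the gyromagnetic ratio taken as the Larmor frequency divided by the field strength; the proton figure used here is the standard textbook value, chosen only for illustration:

    # Gyromagnetic ratio from a measured Larmor frequency and field strength,
    # per the relation stated above. Values are illustrative.
    B = 1.0              # strength of the strong constant field, tesla
    f_larmor = 42.577e6  # proton Larmor frequency at 1 T, Hz (textbook value)
    gamma = f_larmor / B
    print(gamma)         # ~4.2577e7 Hz per tesla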

The gyromagnetic ratios of Li6, Li7, and F19, experimentally determined by Rabi in 1939, agree with the 2014 INDC16 values to better than 1 part in 60,000. Importantly, measurements of the gyromagnetic ratios of Li6 and Li7 were made in three different lithium molecules (LiCl, LiF, and Li2) requiring three separate operational runs, thereby demonstrating the Rabi analyzer was adjusted to optimally detect the nucleus of interest.

Modern determinations of spin are based on various types of spectroscopy, the results of which stand out as peaks in the collected data.

The magnetic flux of nuclei with an even number of Pprotons and Nprotons circulates in flux loops between pairs of Pprotons and pairs of Nprotons, and such nuclei do not have magnetic moments. The flux loops within nuclei with an odd number of Pprotons and/or Nprotons do have magnetic moments. In order for all nuclei of the same isotope to have zero or non-zero magnetic moments of the same amplitude, it is necessary for the magnetic flux loops to be circulating in the same plane.

All of the 106 selected magnetic nuclear isotopes from lithium to uranium, including all stable isotopes with atomic number (Z) greater than 2 plus a number of important isotopes with relatively long half-lives, belong to one of twelve different Types. The Type is determined based on the spin of the isotope and the number of odd and even Pprotons and Nprotons.

An isotope contains an internal physical structure to which the property of magnetic moment correlates, but the magnetic moment is not entirely determined by the internal physical structure of a nucleus. The property of interactional spin is that portion of the magnetic moment due to factors external to the nucleus, including electromagnetic radiation, magnetic fields, electric fields and excitation energy.

Of significance to the present discussion, the detectable magnetic properties of 82 of the 106 selected isotopes (the relative spatial orientations of the flux loops associated with the Pprotons and Nprotons) can be manipulated by four different orientations of directed planar electric fields.

The magnetic signatures of the 106 selected isotopes can be sorted into twelve isotope Types with seven spin values.

Spin ½ isotopes with an odd number of Pprotons and even number of Nprotons are Type A-0. Of the 106 selected isotopes, 10 are Type A-0.

Spin ½ isotopes with an even number of Pprotons and odd number of Nprotons (odd/even Reversed) are Type RA-0. Of the 106 selected isotopes, 14 are Type RA-0.

Spin 1 isotopes with an odd number of Pprotons and an odd number of Nprotons are Type B-1. Of the 106 selected isotopes, 2 are Type B-1.

Spin 3/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-1. Of the 106 selected isotopes, 18 are Type C-1.

Spin 3/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-1. Of the 106 selected isotopes, 12 are Type RC-1.

Spin 5/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-2. Of the 106 selected isotopes, 13 are Type C-2.

Spin 5/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-2. Of the 106 selected isotopes, 11 are Type RC-2.

Spin 3 isotopes with an odd number of Pprotons and an odd number of Nprotons are Type B-3. Of the 106 selected isotopes, 2 are Type B-3.

Spin 7/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type A-3. Of the 106 selected isotopes, 9 are Type A-3.

Spin 7/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RA-3. Of the 106 selected isotopes, 8 are Type RA-3.

Spin 9/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-4. Of the 106 selected isotopes, 3 are Type C-4.

Spin 9/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-4. Of the 106 selected isotopes, 4 are Type RC-4.
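
The twelve assignments above reduce to a lookup on the spin and on the parity of the Pproton and Nproton counts. A minimal sketch (the function name and argument order are illustrative, not from the source):

    # Type assignment per the twelve rules listed above. Keys are
    # (spin, Pproton count parity, Nproton count parity); 1 = odd, 0 = even.
    TYPES = {
        (0.5, 1, 0): "A-0",  (0.5, 0, 1): "RA-0",
        (1.0, 1, 1): "B-1",
        (1.5, 1, 0): "C-1",  (1.5, 0, 1): "RC-1",
        (2.5, 1, 0): "C-2",  (2.5, 0, 1): "RC-2",
        (3.0, 1, 1): "B-3",
        (3.5, 1, 0): "A-3",  (3.5, 0, 1): "RA-3",
        (4.5, 1, 0): "C-4",  (4.5, 0, 1): "RC-4",
    }

    def isotope_type(spin, pprotons, nprotons):
        return TYPES.get((spin, pprotons % 2, nprotons % 2))

    print(isotope_type(0.5, 7, 8))    # 7N15   -> A-0
    print(isotope_type(3.5, 20, 23))  # 20Ca43 -> RA-3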

Above, the horizontal line is in the inspection plane. The vertical line, the photon path to the Rabi analyzer, is parallel to the constant magnetic field. The circle indicates the diameter of the molecular beam, and the crosshairs indicate the velocity of the beam is directed into the paper.

A molecular beam is not needed for the operation of a Rabi analyzer; all that is required is for an analytical sample (gas or liquid phase), comprising a large number of molecules containing a larger number of nuclei enclosing an even larger number of particles, to be located at the intersection of the crosshairs.

The position of the horizontal inspection plane is irrelevant to Rabi analysis, but it is crucial for spectroscopic analysis of flux loops.

Above left, the molecular beam (directed into the paper in the previous illustration) is directed from right to left, and the photon path to the Rabi analyzer is in the same location as in the previous illustration.

For spectroscopic analysis, the inspection plane is the plane defined by the direction the molecular beam formerly passed and the direction of the positive electric field when pointing up.

Above right, the inspection plane for spectroscopic analysis is labelled at each corner. The dashed line in place of the former position of the molecular beam is an orthogonal axis (OA) passing through the direction of the positive side of the electric field when pointing up (UP), and passing through the direction of the spectroscopic detectors (SD).

The intersection of OA, UP and SD is the location where the analytical sample (gas or liquid phase) is placed in the inspection plane. The electric field that orients particle Q-axes is in the inspection plane.

The detection of ten of the twelve Types of magnetic signatures (in the 106 selected isotopes) requires one of four alignments of directed electric fields: the positive side of the electric field pointing up, the positive side of the electric field pointing right, the positive side of the electric field pointing down, or the positive side of the electric field pointing left.

The four possible alignments of the electric field are illustrated on either side of the inspection plane (in operation, the entire breadth of the electric field points in the same direction), and the directed lines on the edges of the inspection plane represent the positions of thin wire cathodes that produce planar electric fields.

Prior to an operational run, the spectroscopic detectors are adjusted to optimally detect the magnetic properties of the isotope to be analyzed.

Above is a summary of isotope magnetic signatures.

Column 1 lists the twelve magnetic isotope Types.

In column 2, with the P-axes of particles oriented by a constant magnetic field directed up in the direction of the magnetic north pole and in the absence of a directed electric field, the magnetic signatures due to flipping odd Pproton P-axes (the arrow on the left of the vignette) and odd Nproton P-axes (the arrow on the right of the vignette) are illustrated.

See below, in the detailed discussion of Type B-1, for the reason there is a zero instead of an arrow in Types B-1 and B-3.

The magnetic signatures due to flux loops in the presence of the four orientations of an electric field are given in columns 3, 4, 5 and 6, for electric fields directed up, down, right, or left.

In illustrations of flux loop magnetic signatures, if the arrows are oriented up and down, the arrow on the left of the vignette represents the direction of Pproton flux loops and the arrow on the right represents the direction of Nproton flux loops. If the arrows are oriented left and right, the arrow on the top of the vignette represents the direction of Pproton flux loops and the arrow on the bottom represents the direction of Nproton flux loops.

In total there are six directed orthogonal planes in Cartesian space, but only four of these are represented in columns 3, 4, 5 and 6. This omission is due to the elliptical planar shape of magnetic flux loops: the missing orientations provide edge-on views without detectable magnetic signatures.

Type A-0

7N15, with 7 Pprotons and 8 Nprotons, is the lowest atomic number Type A-0 isotope. In Type A-0 isotopes the flux loops associated with Pprotons and Nprotons lie in a directed Cartesian plane without detectable flux loop signatures.

In an analytical sample, 50% of the odd (unpaired) Pproton P-axes will be oriented in one direction and 50% in the opposite direction. The orientation of the magnetic axis of the odd Pproton is flipped by the transverse oscillating magnetic field, and the spectroscopic detectors sense two different magnetic signatures, resulting in two peaks corresponding to a spin of ½.

Above is the magnetic signature of Type A-0. The left arrow pointing up is the direction of the odd Pproton P-axis after emission of a photon: the constant magnetic field aligned the Pproton P-axis in this orientation; absorption of intrinsic energy from the transverse oscillating magnetic field flipped the axis to pointing down; then, due to the 180 degree rotation of the P-Q axes with respect to the direction of the G-axis, the absorbed intrinsic energy was released as a photon when the axis was flipped back to pointing up. The arrow pointing down is the antiparallel direction of the P-axis of a paired Nproton (which does not emit a photon).

The experimental detection of Type A-0 isotopes requires a constant magnetic field oriented in the direction of magnetic north.

Type RA-0

6C13, with 6 Pprotons and 7 Nprotons, is the lowest atomic number Type RA-0 isotope. In Type RA-0 isotopes the flux loops associated with Pprotons and Nprotons lie in a directed Cartesian plane without detectable flux loop signatures.

In an analytical sample, 50% of the odd (unpaired) Nproton P-axes will be oriented in one direction and 50% in the opposite direction. The orientation of the magnetic axis of the odd Nproton is flipped by the transverse oscillating magnetic field, and the spectroscopic detectors sense two different magnetic signatures, resulting in two peaks corresponding to a spin of ½.

Above is the magnetic signature of Type RA-0. The left arrow pointing up is the direction of the P-axis of a paired Pproton (which does not emit a photon). The right arrow pointing down is the direction of the odd Nproton P-axis after emission of a photon: the constant magnetic field aligned the Nproton P-axis in this orientation; absorption of intrinsic energy from the transverse oscillating magnetic field flipped the axis to pointing up; then, due to the 180 degree rotation of the P-Q axes with respect to the direction of the G-axis, the absorbed intrinsic energy was released as a photon when the axis was flipped back to pointing down.

The experimental detection of Type RA-0 isotopes requires a constant magnetic field oriented in the direction of magnetic north.

Type B-1

3Li6, with 3 Pprotons and 3 Nprotons, is the lowest atomic number Type B-1 isotope. In isotopes with an odd number of Pprotons and Nprotons, the odd Pproton interacts with the electron in the odd Nproton, preventing electron-Nproton decoupling by the constant magnetic field, and the odd Nproton P-axis is unable to be flipped by the transverse oscillating magnetic field. The electron-Pproton pair, however, is decoupled, the orientation of the odd Pproton magnetic axis is flipped by the transverse oscillating magnetic field, and the spectroscopic detectors, adjusted to optimally recognize the magnetic signatures of 3Li6, sense one distinctive magnetic signature, resulting in one peak.

In Type B-1, the odd Nproton P-axis is unable to be flipped, so there is no magnetic signature due to the Nproton itself, but both the Nproton and the Pproton have associated flux loops, and spectroscopic detectors can sense the magnetic signatures of the flux loops in the presence of a directed electric field pointing up.

In the analysis of isotopes with detectable flux loop signatures there are four possible orientations of the directed electric fields:

  • The magnetic flux loops associated with Type-1 isotopes are detectable if the directed electric field is pointing up.
  • The magnetic flux loops associated with Type-2 isotopes are detectable if the directed electric field is pointing down.
  • The magnetic flux loops associated with Type-3 isotopes are detectable if the directed electric field is pointing right.
  • The magnetic flux loops associated with Type-4 isotopes are detectable if the directed electric field is pointing left.

Each of these directed electric field orientations requires a different experiment; therefore the results of five experiments (including one experiment without directed electric fields) are needed to fully establish the Type of an unknown isotope.

The flux loops circulating through particle P-axes can pass through all radial planes. The radial flux planes in the above diagram are in the plane of the paper, demonstrating that, when detected from opposite directions, flux loops will be CW (directed right-left) or CCW (directed left-right).

Since Pprotons and Nprotons are oppositely aligned, a CW Pproton signature is identical to an Nproton CCW signature, and a CCW Pproton signature is identical to an Nproton CW signature.

Because the magnetic signatures of the particles in the field of view of a detector are differently oriented, on average 50% of the flux loop magnetic signatures will be CW and 50% CCW. Of the CW signatures, 25% of the total will be due to Pprotons and 25% due to Nprotons, and of the CCW signatures, 25% of the total will be due to Pprotons and 25% due to Nprotons.

Thus, there will be two different magnetic signatures resulting in two peaks, but we are unable to distinguish which is due to CW Pproton flux loops or CCW Nproton flux loops, and which is due to CCW Pproton flux loops or CW Nproton flux loops.

In Type B-1, the magnetic signature due to the odd Pproton (experimentally determined in the absence of an electric field) has one peak, and the magnetic signature due to flux loops associated with Pprotons and Nprotons (experimentally determined in an electric field oriented parallel to the magnetic field) has two peaks, totaling three peaks corresponding to a spin of 1.

Here we come to a fundamental issue. Is the uncertainty in situations involving linked physical properties (complementarity) described by probability, or is it caused by probability? In 1925 Werner Heisenberg theorized that this type of uncertainty was caused by probability, and that opinion became, along with intrinsic spin, an important part of the foundation of new quantum theory.

In nature, the orientation of the magnetic signatures of isotopes and the orientation of the nuclei containing the particles responsible for the magnetic signatures are random. The magnetic signatures due to a large number of randomly oriented particles are indistinguishable from background noise, but under the proper experimental conditions, the magnetic signatures are discernable.

The magnetic signatures of flux loops, imperceptible in nature, are perceptible when the Q-axes of the associated particles are aligned.

A constant magnetic field is not needed to detect the magnetic signatures of flux loops, but the inspection plane used to detect the magnetic signatures of flux loops is in the identical position as in the Rabi analyzer, and the directed orthogonal plane pointing up in the direction of magnetic north in the Rabi analyzer is identical to the directed orthogonal plane pointing up in the direction of the positive electric field in the flux loop analyzer; that is, the direction of the electric field is parallel to the magnetic field.

Therefore, even though the magnetic field is not needed to detect the magnetic signatures of flux loops, if the magnetic field is present in addition to the directed electric field its presence would not alter the experimental results, but it might provide additional information.

Here is a prediction of the present theory. If the experiment detecting the magnetic signature of Type B-1 is conducted in the presence of a constant magnetic field and a directed electric field pointing up, that one experiment will determine the magnetic signatures shown above plus two additional signatures: (1) the magnetic signature due to CW Pproton flux loops and CCW Nproton flux loops and (2) the magnetic signature due to CW Nproton flux loops and CCW Pproton flux loops.

This result would demonstrate that the uncertainty in at least one situation involving linked physical properties is described by probability but is not caused by probability. This and other experiments yet to be devised will overturn the concept of causation by probability, and validate Einstein’s intuition that God “does not play dice with the universe.”17

Type C-1

3Li7, with 3 Pprotons and 4 Nprotons, is the lowest atomic number Type C-1 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. In total, Type C-1 isotopes have four peaks corresponding to a spin of 3/2.

Type RC-1

4Be9, with 4 Pprotons and 5 Nprotons, is the lowest atomic number Type RC-1 isotope.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. In total, Type RC-1 isotopes have four peaks corresponding to a spin of 3/2.

Type C-2

13Al27, with 13 Pprotons and 14 Nprotons, is the lowest atomic number Type C-2 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks.

In the identification of Type C-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. In total, Type C-2 isotopes have six peaks corresponding to a spin of 5/2.

Type RC-2

8O17, with 8 Pprotons and 9 Nprotons, is the lowest atomic number Type RC-2 isotope. 8O17 has one odd Nproton and no odd Pprotons.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks.

In the identification of Type RC-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. In total, Type RC-2 isotopes have six peaks corresponding to a spin of 5/2.

Type B-3

5B10, with 5 Pprotons and 5 Nprotons, is the lowest atomic number Type B-3 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks.

In the identification of Type B-3, the magnetic signature due to the odd Pproton flux loops, determined in an electric field pointing right, has two peaks. In total, Type B-3 isotopes have seven peaks corresponding to a spin of 3.

Type A-3

21Sc45, with 21 Pprotons and 24 Nprotons, is the lowest atomic number Type A-3 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In total, Type A-3 isotopes have eight peaks corresponding to a spin of 7/2.

Type RA-3

20Ca43, with 20 Pprotons and 23 Nprotons, is the lowest atomic number Type RA-3 isotope.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type RC-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In total, Type RA-3 isotopes have eight peaks corresponding to a spin of 7/2.

Type C-4

41Nb93, with 41 Pprotons and 52 Nprotons, is the lowest atomic number Type C-4 isotope.

As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In the identification of Type C-4, the magnetic signature due to the odd Nproton flux loops, determined in an electric field pointing left, has two peaks. In total, Type C-4 isotopes have ten peaks corresponding to a spin of 9/2.

Type RC-4

32Ge73, with 32 Pprotons and 41 Nprotons, is the lowest atomic number Type RC-4 isotope.

As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type RC-2, the magnetic signature due to the flux loops of an odd particle, determined in an electric field pointing down, has two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In the identification of Type RC-4, the magnetic signature due to the odd Nproton flux loops, determined in an electric field pointing left, has two peaks. In total, Type RC-4 isotopes have ten peaks corresponding to a spin of 9/2.
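
Across all twelve Types, the quoted totals follow the zone-counting rule from earlier in this Part: total peaks = 2 x spin + 1, the inverse of S = (z - 1)/2. A quick check (the dictionary simply restates the per-Type spins given above):

    # Each Type's quoted peak total equals 2 * spin + 1.
    TYPE_SPINS = {"A-0": 0.5, "RA-0": 0.5, "B-1": 1.0, "C-1": 1.5,
                  "RC-1": 1.5, "C-2": 2.5, "RC-2": 2.5, "B-3": 3.0,
                  "A-3": 3.5, "RA-3": 3.5, "C-4": 4.5, "RC-4": 4.5}
    for t, s in TYPE_SPINS.items():
        print(t, s, int(2 * s + 1))  # e.g. B-1 -> 3 peaks, RC-4 -> 10 peaks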


Isotope    Z    N   Z+N   Spin   Peaks   Type
7N15       7    8    15   0.5    2       A-0
9F19       9   10    19   0.5    2       A-0
15P31     15   16    31   0.5    2       A-0
39Y89     39   50    89   0.5    2       A-0
45Rh103   45   58   103   0.5    2       A-0
47Ag109   47   62   109   0.5    2       A-0
47Ag107   47   60   107   0.5    2       A-0
69Tm169   69  100   169   0.5    2       A-0
81Tl203   81  122   203   0.5    2       A-0
81Tl205   81  124   205   0.5    2       A-0

6C13       6    7    13   0.5    2       RA-0
14Si29    14   15    29   0.5    2       RA-0
26Fe57    26   31    57   0.5    2       RA-0
34Se77    34   43    77   0.5    2       RA-0
48Cd111   48   63   111   0.5    2       RA-0
50Sn117   50   67   117   0.5    2       RA-0
50Sn115   50   65   115   0.5    2       RA-0
52Te125   52   73   125   0.5    2       RA-0
54Xe129   54   75   129   0.5    2       RA-0
74W183    74  109   183   0.5    2       RA-0
76Os187   76  111   187   0.5    2       RA-0
78Pt195   78  117   195   0.5    2       RA-0
80Hg199   80  119   199   0.5    2       RA-0
82Pb207   82  125   207   0.5    2       RA-0

3Li6       3    3     6   1.0    3       B-1
7N14       7    7    14   1.0    3       B-1

3Li7       3    4     7   1.5    4       C-1
5B11       5    6    11   1.5    4       C-1
11Na23    11   12    23   1.5    4       C-1
17Cl35    17   18    35   1.5    4       C-1
17Cl37    17   20    37   1.5    4       C-1
19K39     19   20    39   1.5    4       C-1
19K41     19   22    41   1.5    4       C-1
29Cu63    29   34    63   1.5    4       C-1
29Cu65    29   36    65   1.5    4       C-1
31Ga69    31   38    69   1.5    4       C-1
31Ga71    31   40    71   1.5    4       C-1
33As75    33   42    75   1.5    4       C-1
35Br79    35   44    79   1.5    4       C-1
35Br81    35   46    81   1.5    4       C-1
65Tb159   65   94   159   1.5    4       C-1
77Ir193   77  116   193   1.5    4       C-1
77Ir191   77  114   191   1.5    4       C-1
79Au197   79  118   197   1.5    4       C-1

4Be9       4    5     9   1.5    4       RC-1
10Ne21    10   11    21   1.5    4       RC-1
16S33     16   17    33   1.5    4       RC-1
24Cr53    24   29    53   1.5    4       RC-1
28Ni61    28   33    61   1.5    4       RC-1
54Xe131   54   77   131   1.5    4       RC-1
56Ba135   56   79   135   1.5    4       RC-1
56Ba137   56   81   137   1.5    4       RC-1
64Gd155   64   91   155   1.5    4       RC-1
64Gd157   64   93   157   1.5    4       RC-1
76Os189   76  113   189   1.5    4       RC-1
80Hg201   80  121   201   1.5    4       RC-1

13Al27    13   14    27   2.5    6       C-2
25Mn51    25   26    51   2.5    6       C-2
25Mn55    25   30    55   2.5    6       C-2
37Rb85    37   48    85   2.5    6       C-2
51Sb121   51   70   121   2.5    6       C-2
53I127    53   74   127   2.5    6       C-2
59Pr141   59   82   141   2.5    6       C-2
61Pm145   61   84   145   2.5    6       C-2
63Eu151   63   88   151   2.5    6       C-2
63Eu153   63   90   153   2.5    6       C-2
75Re185   75  110   185   2.5    6       C-2

8O17       8    9    17   2.5    6       RC-2
12Mg25    12   13    25   2.5    6       RC-2
22Ti47    22   25    47   2.5    6       RC-2
30Zn67    30   37    67   2.5    6       RC-2
40Zr91    40   51    91   2.5    6       RC-2
42Mo95    42   53    95   2.5    6       RC-2
42Mo97    42   55    97   2.5    6       RC-2
44Ru101   44   57   101   2.5    6       RC-2
44Ru99    44   55    99   2.5    6       RC-2
46Pd105   46   59   105   2.5    6       RC-2
66Dy161   66   95   161   2.5    6       RC-2
66Dy163   66   97   163   2.5    6       RC-2
70Yb173   70  103   173   2.5    6       RC-2

5B10       5    5    10   3.0    7       B-3
11Na22    11   11    22   3.0    7       B-3

21Sc45    21   24    45   3.5    8       A-3
23V51     23   28    51   3.5    8       A-3
27Co59    27   32    59   3.5    8       A-3
51Sb123   51   72   123   3.5    8       A-3
55Cs133   55   78   133   3.5    8       A-3
57La139   57   82   139   3.5    8       A-3
67Ho165   67   98   165   3.5    8       A-3
71Lu175   71  104   175   3.5    8       A-3
73Ta181   73  108   181   3.5    8       A-3

20Ca43    20   23    43   3.5    8       RA-3
22Ti49    22   27    49   3.5    8       RA-3
60Nd143   60   83   143   3.5    8       RA-3
60Nd145   60   85   145   3.5    8       RA-3
62Sm149   62   87   149   3.5    8       RA-3
68Er167   68   99   167   3.5    8       RA-3
72Hf177   72  105   177   3.5    8       RA-3
92U235    92  143   235   3.5    8       RA-3

41Nb93    41   52    93   4.5   10       C-4
49In113   49   64   113   4.5   10       C-4
83Bi209   83  126   209   4.5   10       C-4

32Ge73    32   41    73   4.5   10       RC-4
36Kr83    36   47    83   4.5   10       RC-4
38Sr87    38   49    87   4.5   10       RC-4
72Hf179   72  107   179   4.5   10       RC-4

Isotope    Z    N   Z+N   Spin   Peaks   Type
3Li6       3    3     6   1.0    3       B-1
3Li7       3    4     7   1.5    4       C-1
4Be9       4    5     9   1.5    4       RC-1
5B10       5    5    10   3.0    7       B-3
5B11       5    6    11   1.5    4       C-1
6C13       6    7    13   0.5    2       RA-0
7N14       7    7    14   1.0    3       B-1
7N15       7    8    15   0.5    2       A-0
8O17       8    9    17   2.5    6       RC-2
9F19       9   10    19   0.5    2       A-0
10Ne21    10   11    21   1.5    4       RC-1
11Na23    11   12    23   1.5    4       C-1
11Na22    11   11    22   3.0    7       B-3
12Mg25    12   13    25   2.5    6       RC-2
13Al27    13   14    27   2.5    6       C-2
14Si29    14   15    29   0.5    2       RA-0
15P31     15   16    31   0.5    2       A-0
16S33     16   17    33   1.5    4       RC-1
17Cl35    17   18    35   1.5    4       C-1
17Cl37    17   20    37   1.5    4       C-1
19K39     19   20    39   1.5    4       C-1
19K41     19   22    41   1.5    4       C-1
20Ca43    20   23    43   3.5    8       RA-3
21Sc45    21   24    45   3.5    8       A-3
22Ti47    22   25    47   2.5    6       RC-2
22Ti49    22   27    49   3.5    8       RA-3
23V51     23   28    51   3.5    8       A-3
24Cr53    24   29    53   1.5    4       RC-1
25Mn51    25   26    51   2.5    6       C-2
25Mn55    25   30    55   2.5    6       C-2
26Fe57    26   31    57   0.5    2       RA-0
27Co59    27   32    59   3.5    8       A-3
28Ni61    28   33    61   1.5    4       RC-1
29Cu63    29   34    63   1.5    4       C-1
29Cu65    29   36    65   1.5    4       C-1
30Zn67    30   37    67   2.5    6       RC-2
31Ga69    31   38    69   1.5    4       C-1
31Ga71    31   40    71   1.5    4       C-1
32Ge73    32   41    73   4.5   10       RC-4
33As75    33   42    75   1.5    4       C-1
34Se77    34   43    77   0.5    2       RA-0
35Br79    35   44    79   1.5    4       C-1
35Br81    35   46    81   1.5    4       C-1
36Kr83    36   47    83   4.5   10       RC-4
37Rb85    37   48    85   2.5    6       C-2
38Sr87    38   49    87   4.5   10       RC-4
39Y89     39   50    89   0.5    2       A-0
40Zr91    40   51    91   2.5    6       RC-2
41Nb93    41   52    93   4.5   10       C-4
42Mo95    42   53    95   2.5    6       RC-2
42Mo97    42   55    97   2.5    6       RC-2
44Ru101   44   57   101   2.5    6       RC-2
44Ru99    44   55    99   2.5    6       RC-2
45Rh103   45   58   103   0.5    2       A-0
46Pd105   46   59   105   2.5    6       RC-2
47Ag107   47   60   107   0.5    2       A-0
47Ag109   47   62   109   0.5    2       A-0
48Cd111   48   63   111   0.5    2       RA-0
49In113   49   64   113   4.5   10       C-4
50Sn115   50   65   115   0.5    2       RA-0
50Sn117   50   67   117   0.5    2       RA-0
51Sb121   51   70   121   2.5    6       C-2
51Sb123   51   72   123   3.5    8       A-3
52Te125   52   73   125   0.5    2       RA-0
53I127    53   74   127   2.5    6       C-2
54Xe129   54   75   129   0.5    2       RA-0
54Xe131   54   77   131   1.5    4       RC-1
55Cs133   55   78   133   3.5    8       A-3
56Ba135   56   79   135   1.5    4       RC-1
56Ba137   56   81   137   1.5    4       RC-1
57La139   57   82   139   3.5    8       A-3
59Pr141   59   82   141   2.5    6       C-2
60Nd143   60   83   143   3.5    8       RA-3
60Nd145   60   85   145   3.5    8       RA-3
61Pm145   61   84   145   2.5    6       C-2
62Sm149   62   87   149   3.5    8       RA-3
63Eu151   63   88   151   2.5    6       C-2
63Eu153   63   90   153   2.5    6       C-2
64Gd155   64   91   155   1.5    4       RC-1
64Gd157   64   93   157   1.5    4       RC-1
65Tb159   65   94   159   1.5    4       C-1
66Dy161   66   95   161   2.5    6       RC-2
66Dy163   66   97   163   2.5    6       RC-2
67Ho165   67   98   165   3.5    8       A-3
68Er167   68   99   167   3.5    8       RA-3
69Tm169   69  100   169   0.5    2       A-0
70Yb173   70  103   173   2.5    6       RC-2
71Lu175   71  104   175   3.5    8       A-3
72Hf177   72  105   177   3.5    8       RA-3
72Hf179   72  107   179   4.5   10       RC-4
73Ta181   73  108   181   3.5    8       A-3
74W183    74  109   183   0.5    2       RA-0
75Re185   75  110   185   2.5    6       C-2
76Os187   76  111   187   0.5    2       RA-0
76Os189   76  113   189   1.5    4       RC-1
77Ir191   77  114   191   1.5    4       C-1
77Ir193   77  116   193   1.5    4       C-1
78Pt195   78  117   195   0.5    2       RA-0
79Au197   79  118   197   1.5    4       C-1
80Hg199   80  119   199   0.5    2       RA-0
80Hg201   80  121   201   1.5    4       RC-1
81Tl203   81  122   203   0.5    2       A-0
81Tl205   81  124   205   0.5    2       A-0
82Pb207   82  125   207   0.5    2       RA-0
83Bi209   83  126   209   4.5   10       C-4
92U235    92  143   235   3.5    8       RA-3

In GAP, the gyromagnetic ratio of a nucleus is equal to the product of the INDC isotope g-factor and the CODATA nuclear magneton divided by the product of the INDC intrinsic spin and the CODATA reduced Planck constant, and the magnetic moment of a nucleus is equal to the product of the INDC isotope g-factor and the CODATA nuclear magneton.

In discrete physics, the magnetic moment of a nucleus is equal to the product of two times the interactional spin (which converts spin to the number of odd Pprotons and/or odd Nprotons), the kinetic steric factor (which converts molecular beam thermal energy into Joules), Lambda-bar, and the GAP value for the gyromagnetic ratio (assumed correct).

In the 106 isotopes tested, the ratio of the INDC isotope magnetic moment divided by the value denominated in discrete units is equal to 1.0288816.

The difference can be narrowed by adjustment but cannot be eliminated because CODATA constants are not exactly reconciled.
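
A minimal numerical sketch of the GAP relations as stated above, using CODATA values; the g-factor and spin below (roughly those of a bare proton) are illustrative placeholders, not INDC values quoted by the source:

    # GAP relations per the text: gamma = g * mu_N / (spin * hbar); mu = g * mu_N.
    mu_N = 5.0507837461e-27  # CODATA nuclear magneton, J/T
    hbar = 1.054571817e-34   # CODATA reduced Planck constant, J*s
    g, spin = 5.5857, 0.5    # illustrative g-factor and intrinsic spin
    gamma = g * mu_N / (spin * hbar)
    mu = g * mu_N
    print(gamma, mu)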

Part Four

Particle acceleration

Einstein believed mass was constant, and many of his revolutionary discoveries were based on that concept. Constancy of mass is an eminently reasonable assumption because Newtonian equations are also founded on mass conservation, and in the majority of situations those equations accurately predict the observables. But in fact, as Newton himself succinctly expressed in his letter to Richard Bentley, his equations do not correspond to physical reality.18

Einstein also believed the speed of light was constant, and since kinetic energy is proportional to mass and the square of velocity, he concluded that the mass of a particle increases with velocity and approaches (but never reaches) a maximum value as the velocity approaches the speed of light. In special relativity he was able to derive, in a few simple equations, the relativistic momentum and energy (mass-energy) of a particle.

In general relativity, Einstein’s field equations described the curvature of space-time in intense gravitational fields in agreement with the measured value for the precession of the perihelion of Mercury. It seems likely the field equations were derived with that result in mind. Even so, this approach is eminently justifiable because measurables are valid assumptions for a physical theory.

Einstein’s prediction that the curvature of space-time in intense gravitational fields was not only responsible for the precession of the perihelion of Mercury but would also bend rays of light was verified in two astronomical expeditions, led by Arthur Eddington and Andrew Crommelin. Their observations were acclaimed as verification of general relativity, and today the curvature of space-time is considered by most scientists to be undisputed.

Unfortunately, this undisputed theory cannot determine the velocity of a relativistically accelerated electron or proton and does not provide a mechanism for the increase in energy and mass (mass-energy).

The present theory derives the velocity and mass-energy of accelerated electrons and protons, and provides a mechanism.

In particle acceleration, charged particles are electrostatically formed into a linear beam and accelerated, then injected into a circular accelerator (or cyclotron) where they are magnetically formed into a circular beam and further accelerated by oscillating magnetic fields. Particle acceleration in linear and circular beams is mediated by chirality meshing interactions.

An electrostatic voltage is the emission of quantons:

  • In electrostatic acceleration of negatively charged particles between a negative cathode on the left emitting CCW quantons and a positive anode on the right emitting CW quantons, chirality meshing absorptions of CCW quantons result in repulsive deflections (voltage acceleration) to the right, and chirality meshing absorptions of CW quantons result in attractive deflections (voltage acceleration) to the right.
  • If positively charged particles are between a negative cathode on the left emitting CCW quantons and a positive anode on the right emitting CW quantons, chirality meshing absorptions of CCW quantons result in attractive deflections (voltage acceleration) to the left, and chirality meshing absorptions of CW quantons result in repulsive deflections (voltage acceleration) to the left.

Quantons are also produced transverse to a magnetic field with CCW quantons emitted by the magnetic North pole and CW quantons emitted by the magnetic South pole:

  • In acceleration by a transverse oscillating magnetic field, charged particles are alternately pushed (repulsively deflected) from one direction and pulled (attractively deflected) from the opposite direction.
  • Negatively charged particles are alternately pushed (deflected in the direction of the positive anode) due to the absorption of CCW quantons and pulled (deflected in the direction of the positive anode) due to the absorption of CW quantons.
  • Positively charged particles are alternately pulled (deflected in the direction of the negative cathode) due to the absorption of CCW quantons, and pushed (deflected in the direction of the negative cathode) due to the absorption of CW quantons.

In either case (electrostatic voltage or oscillating magnetic voltage) the energy of simultaneous acceleration by oppositely directed voltages is proportional to the square of the voltage.

A chirality meshing absorption of a quanton increases the intrinsic energy of a particle and produces an intrinsic deflection that increases the particle velocity. Like kinetic acceleration, an intrinsic deflection increases the velocity but does so without the dissipation of kinetic energy.

The number of particles and quantons is directly proportional to the intrinsic Josephson constant: 3.0000E15 quantons are absorbed by 3.0000E15 particles per second per Volt. At 400 Volts, 1.2000E18 quantons are absorbed by 1.2000E18 particles per second, and at 250,000 Volts, 7.5000E20 quantons are absorbed by 7.5000E20 particles per second.
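
The scaling is linear in the voltage, so it can be checked in a few lines (the constant name is illustrative):

    # Quanton (and particle) counts per second scale linearly with voltage.
    K_INTRINSIC_JOSEPHSON = 3.0000e15  # per second per Volt, per the text
    for volts in (1, 400, 250_000):
        print(volts, K_INTRINSIC_JOSEPHSON * volts)
    # -> 3.0000e15, 1.2000e18, 7.5000e20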

Each quanton absorption produces a deflection (acceleration) equal to the square root of the ratio of Lambda-bar to the particle amplitude. Quanton absorption by an electron produces a deflection of 2.5327E-18 meters, and quanton absorption by a proton produces a deflection of 2.0680E-19 meters.
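
The two quoted deflections differ by the square root of the proton amplitude (150, per the proton-velocity comparison later in this Part), which is easy to confirm:

    import math
    d_electron = 2.5327e-18  # meters per quanton absorption
    d_proton = 2.0680e-19    # meters per quanton absorption
    print(d_electron / d_proton)  # ~12.2473
    print(math.sqrt(150))         # ~12.2474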

The number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar. The intrinsic energy absorbed by a particle in a chirality meshing interaction is equal to the product of the number of chirality meshing interactions and Lambda-bar, divided by the number of particles. The accelerated particle intrinsic energy is equal to the sum of the particle intrinsic energy plus the intrinsic energy absorbed by the particle in a chirality meshing interaction.

The kinetic mass-energy in units of Joule is equal to the product of the accelerated particle intrinsic energy, the square of the photon velocity, and the ratio of the discrete Planck constant divided by Lambda-bar.

Electron acceleration

Below left, the GAP equation for electron velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the CODATA elementary charge (units of Coulomb) and the voltage, divided by the CODATA electron mass (units of kilogram).

Above right, the discrete equation for electron velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the charge intrinsic energy and the voltage, divided by the electron intrinsic energy.

The velocity calculated by the GAP equation is higher than the velocity calculated by the discrete equation by a factor of 1.007697. The difference can be narrowed by adjustment but cannot be eliminated because CODATA constants are not reconciled.

The analysis of electron acceleration includes a range of ten voltages between a minimum voltage and a maximum voltage. The maximum voltage is equal to a few millivolts less than the theoretical voltage required to accelerate an electron to the photon velocity (an impossibility), which, if calculated to fifteen significant digits, is 259807.621135332 Volts.

Top row column 1, the voltages used in this example analysis are 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts. The highest voltage, calculated to thirteen significant digits, exactly converts to the photon velocity (an impossibility) to eleven significant digits but is less than the photon velocity (the correct result) at twelve significant digits (this is an excellent example of a discretely exact property).

The equations following, calculations for 100 Volts, are identical to the equations for any other of the nine voltages, or for any other range of ten voltages greater than zero and less than the theoretical maximum.

Top row column 2, the calculated electron velocity per the discrete equation.

Top row column 3, the number of accelerated (deflected) electrons is equal to the ratio of the voltage divided by the intrinsic electron magnetic flux quantum.

Top row column 4, the deflection per quanton is equal to the square root of the ratio of Lambda-bar to the electron amplitude.

This is the deflection of a chirality meshing interaction between a quanton and an electron.

Bottom row column 1, the number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar.

Bottom row column 2, the increase in intrinsic energy per electron due to chirality meshing interactions, equal to the product of the number of chirality meshing interactions and Lambda-bar divided by the number of electrons, is denominated in units of Einstein.

Bottom row column 3, the accelerated electron energy is equal to the sum of the electron intrinsic energy and the increase in intrinsic energy per electron.

Bottom row column 4, the mass-energy in units of Joule is equal to the product of the accelerated electron intrinsic energy, the square of the photon velocity and the ratio of the discrete Planck constant divided by Lambda-bar.
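
A minimal sketch of the velocity calculation using CODATA values; the discrete constants are not given numerically in the text, so the discrete velocity here is recovered from the GAP value via the stated factor of 1.007697, and the last column applies the square-root-of-150 relation quoted for the proton in the next section:

    import math

    e = 1.602176634e-19     # CODATA elementary charge, C
    m_e = 9.1093837015e-31  # CODATA electron mass, kg
    volts = [1, 100, 400, 800, 4000, 10000, 25000,
             100000, 250000, 259807.621135]
    for V in volts:
        v_gap = math.sqrt(2 * e * V / m_e)      # GAP equation
        v_discrete = v_gap / 1.007697           # GAP/discrete factor, per the text
        v_proton = v_discrete / math.sqrt(150)  # discrete proton velocity, per the text
        print(f"{V:>14} V  {v_gap:.6e}  {v_discrete:.6e}  {v_proton:.6e}")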

Proton acceleration

The analysis of proton acceleration includes a range of ten voltages between a minimum voltage and a maximum voltage. For purposes of comparison, we specify the same voltages as used for the electron.

The theoretical voltage required to accelerate a proton to the photon velocity (an impossibility) is 38971143.1702997 Volts. Any voltage less than this theoretical maximum will accelerate a proton to less than the photon velocity.

Below left, the GAP equation for proton velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the CODATA elementary charge (units of Coulomb) and the voltage, divided by the CODATA proton mass (units of kilogram).

Above right, the discrete equation for proton velocity, due to electrostatic or electromagnetic voltage, is equal to the square root of the ratio of the product of 2, the charge intrinsic energy (in units of intrinsic Volt) and the voltage, divided by the proton intrinsic energy (in units of Einstein).

The discrete proton velocity is lower than the discrete electron velocity by the square root of 150 (the square root of the proton amplitude).

Top row column 1, the voltages used in this example analysis are 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts.

The equations following, calculations for 100 Volts, are identical to the equations for any other of the nine voltages, or for any other range of ten voltages greater than zero and less than the theoretical maximum.

Top row column 2, the calculated proton velocity per the discrete equation.

Top row column 3, the number of accelerated (deflected) protons is equal to the ratio of the voltage divided by the intrinsic electron magnetic flux quantum.

Top row column 4, the deflection per quanton is equal to the square root of the ratio of Lambda-bar to the proton amplitude.

This is the deflection of a chirality meshing interaction between a quanton and a proton.

Bottom row column 1, the number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar.

Bottom row column 2, the increase in intrinsic energy per proton due to chirality meshing interactions, equal to the product of the number of chirality meshing interactions and Lambda-bar divided by the number of protons, is denominated in units of Einstein.

Bottom row column 3, the accelerated proton energy is equal to the sum of the intrinsic proton energy and the increase in intrinsic energy per proton.

Bottom row column 4, the mass-energy in units of Joule is equal to the product of the accelerated proton intrinsic energy, the square of the photon velocity and the ratio of the discrete Planck constant divided by Lambda-bar.

Part Five

Atomic Spectra

The Rydberg equations correspond to high accuracy with the hydrogen spectral series and the Newtonian equations correspond to high accuracy with orbital motion but, despite many years of considerable effort, physicists have been unable to account for the spectrum of helium or for non-Newtonian stellar rotation curves.

Previously, we reformulated the Newtonian equations and explained stellar rotation curves. In this chapter we will reformulate the Rydberg equations for the spectral series of hydrogen and derive a general explanation for atomic spectra.

The equation formulated by Johann Balmer in 1885, in which the hydrogen spectrum wave numbers are proportional to the product of a constant and the difference between the inverse square of two integers, is correct, but the Bohr Model is not.

The electron is not a point particle; the electron does not orbit the proton; the force conveyed by an electron is not transmitted an infinite distance; at an infinitesimal distance the force is not infinite; electrons with lower energy and lower wave number are closer to the proton; and electrons with higher energy and higher wave number are further from the proton (the Bohr distance-energy relationship must be reversed).

In hydrogen an electron and proton are engaged in a positional resonance. In atoms larger than hydrogen many electrons and protons are engaged in positional resonances. Each resonance is between one electron external to the nucleus and one proton internal to the nucleus, in which the electron and the nuclear proton are facing in opposite directions and each particle emits quantons that are absorbed by the other particle. On emission by the electron the quanton is CCW and on emission by the nuclear proton the quanton is CW. On emission the emitting particle recoils by a distance proportional to the particle intrinsic energy and on absorption the absorbing particle is attractively deflected (a chirality meshing interaction) by a distance proportional to the particle intrinsic energy. The result is a sustained positional resonance of a CCW quanton emitted in one direction by the electron and absorbed by the nuclear proton and a CW quanton emitted in the opposite direction by the nuclear proton and absorbed by the electron.

In the hydrogen atom, the resonance can be situated at any one of several quantized positions proportional to energy and corresponding to spectral emission and absorption lines. On emission of a photon the energy of the resonance decreases, and the electron drops to the adjacent lower energy level. On absorption of a photon the energy of the resonance increases, and the electron jumps to the adjacent higher energy level. The highest stable energy level, corresponding to an emission-only line, the maximum electron-proton separation distance beyond which the positional resonance no longer exists, is the hydrogen ionization energy.

The above paragraphs summarize the spectral mechanism which, for the time being, shall be considered a hypothesis.

The intrinsic to kinetic energy factor is equal to: the ratio of the discrete Planck constant divided by the Coulomb, divided by the ratio of Lambda-bar divided by the charge intrinsic energy; the ratio of the discrete Planck constant divided by the product of Lambda-bar and half the square root of the proton amplitude; and two times the intrinsic steric factor.

The ionization energy of hydrogen (in larger atoms the ionization energy required to remove the last electron) is a discretely exact single value above which the atom no longer exists. The measured energy of hydrogen ionization is 1312 kJ/mol, and the corresponding CRC value is 13.59844 (units of kinetic electron Volts).19 Kinetic electron Volts divided by Omega-2 equals intrinsic Volts (units of Joule), which divided by 12 (the intrinsic to kinetic energy factor) equals intrinsic Volts (units of Einstein), which multiplied by the intrinsic electron charge equals intrinsic energy, which divided by Lambda-bar is equal to the photon frequency of hydrogen ionization.

Working backwards from the calculation sequences above, the discretely exact value of the photon ionization frequency is 3.28000000E15.

The intrinsic energy of hydrogen ionization, denominated in units of Einstein, is equal to the product of the photon frequency and Lambda-bar.

The intrinsic energy of hydrogen ionization, denominated in units of Joule, is equal to the product of the photon frequency and the discrete Planck constant.

The intrinsic voltage of hydrogen ionization, denominated in units of Einstein, is equal to the product of the photon frequency and Lambda-bar, divided by the charge intrinsic energy.

The ratio of the intrinsic voltage of hydrogen ionization divided by Psi is equal to the discrete Rydberg constant and denominated in units of inverse meter (spatial frequency).

The intrinsic voltage of hydrogen ionization, denominated in units of Joule, is equal to the product of 12 (the intrinsic to kinetic energy factor), the discrete Rydberg constant, and the product of the photon frequency and the discrete Planck constant, divided by the Coulomb.

The kinetic voltage of hydrogen ionization, denominated in units of electron Volt, is equal to the product of the intrinsic voltage of hydrogen ionization and omega-2.

The difference between the above calculated energy of ionization and the CRC value is less than 0.30%. The poor accuracy is due to the performance standards of calorimeters.20 In the measurement of a sample against a calibration standard, a statistical analysis of the results will show the data lie within three standard deviations (sigma-3) of the mean (the expected value), and the accuracy will be 0.15% (99.85% of the measurements will lie no more than 0.15% above or 0.15% below the calibration standard). If the identical procedure is used without prior knowledge of the expected result, so that whether the measurement is higher or lower than the actual value is unknown, the accuracy falls to no more than 0.30%.

The difference between the calculated kinetic voltage of hydrogen ionization and the measured CRC value, expressed as a percentage, is 0.2666%.

Spectral series consist of a number of emission-absorption lines with a lower limit on the left and an upper limit on the right. Both limits are asymptotes: the lower limit corresponds to minimum energy, minimum frequency, and maximum wavelength; and the upper limit corresponds to maximum energy, maximum frequency, and minimum wavelength.

The below diagram of the Lyman spectral series consists of seven black emission-absorption lines to the left and a red emission-only line on the right. From left to right these lines are the Lyman lower limit (Lyman-A), Lyman-B, Lyman-C, Lyman-D, Lyman-E, Lyman-F, Lyman-G, and the Lyman upper limit.

The Rydberg equation expresses the wave numbers of the hydrogen spectrum as equal to the product of the discrete Rydberg constant and the difference between the inverse square of the m-index and the inverse square of the n-index.
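
Written out, with R_d for the discrete Rydberg constant and c for the photon velocity:

\[
\tilde{\nu}_{m,n} \;=\; R_d\left(\frac{1}{m^{2}} - \frac{1}{n^{2}}\right), \qquad \nu_{m,n} \;=\; c\,\tilde{\nu}_{m,n}
\]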

The m-index has a constant value for each spectral series within the hydrogen spectrum. The six series, ordered from highest upper-limit energy downward, are Lyman, Balmer, Paschen, Brackett, Pfund and Humphreys.

Each line of a spectral series can be expressed in terms of energy, wave number, wavelength and photon frequency. The energy, wave number, and frequency increase from left to right, but the wavelength decreases from left to right.

For each spectral series the m-index increases from lowest to highest positional energy (Lyman = 1, Balmer = 2, Paschen = 3, Brackett = 4, Pfund = 5, Humphreys = 6). Each spectral series is composed of a sequence of lines (A, B, C, D, E, F, G) in which the n-index is equal to m+1, m+2, m+3, m+4, etc.

In the following analysis we will apply the Rydberg formula to calculate, based on the discretely exact value of the photon ionization frequency of 3.280000E15, the values for energy, wave number and frequency of the six spectral series of hydrogen.

The below calculations begin with the discretely exact values for the Lyman limit photon frequency and the hydrogen ionization energy (intrinsic voltage units of Joule), and the value of the discrete Rydberg constant.

The Lyman upper limit is an emission-only line because at any energy above the Lyman upper limit the hydrogen atom no longer exists. The calculation for the line prior to the Lyman upper limit is based on an n-index equal to 8, but there are additional discernable lines after Lyman-G because the Lyman upper limit is an asymptote. The identical situation holds for the limit of any spectral series.

The spectral series lower limit, the A-line (Lyman-A, Balmer-A, etc.) is also an asymptote and there are additional discernable lines between the C-line and the A-line. The number of lines included in a spectral series analysis is optional, but it is convenient to use the same number of lines in spectral series to be compared.

In this presentation, 8 Lyman and Balmer lines are included because these lines are specified in at least one of the easily available online sources. In the Paschen, Brackett, Pfund and Humphreys spectral series, 6 lines are included because these are also easily available.21

The ratio of the Lyman upper limit divided by the upper limit of another hydrogen spectral series is equal to the square of the m-index of the other series:

  • The Lyman upper limit divided by the Balmer upper limit is equal to 4.
  • The Lyman upper limit divided by the Paschen upper limit is equal to 9.
  • The Lyman upper limit divided by the Brackett upper limit is equal to 16.
  • The Lyman upper limit divided by the Pfund upper limit is equal to 25.
  • The Lyman upper limit divided by the Humphreys upper limit is equal to 36.

The ratio of the Lyman spectral series upper limit divided by the Lyman spectral series lower limit is equal to the ratio of the Rydberg wave number calculation for the upper limit divided by the Rydberg wave number calculation for the lower limit.

In all spectral series the Rydberg ratio is equal to the upper limit energy divided by the lower limit energy, the ratio of the upper limit structural frequency divided by the lower limit structural frequency, and the ratio of the lower limit wavelength divided by the upper limit wavelength.

The ratio of the Balmer spectral series upper limit divided by the Balmer spectral series lower limit is equal to the ratio of the Rydberg wave number calculation for the upper limit divided by the Rydberg wave number calculation for the lower limit.

The same calculation is used for the other four hydrogen spectral series:

  • The ratio of the Paschen spectral series upper limit divided by the Paschen lower limit is equal to 1312/574 (2.285714).
  • The ratio of the Brackett spectral series upper limit divided by the Brackett lower limit is equal to 25/9 (2.777777).
  • The ratio of the Pfund spectral series upper limit divided by the Pfund lower limit is equal to 36/11 (3.272727).
  • The ratio of the Humphreys spectral series upper limit divided by the Humphreys lower limit is equal to 49/13 (3.769230).

Above, the frequencies under the A, B, C, D, E, F, G-lines and the series limit are the positional structural frequencies, and the transition frequencies between lines (B-A, C-B … F-E, G-F) are the photon emission-absorption frequencies.

The structural frequency of the G-line is equal to the product of the Rydberg calculated wave number and the photon velocity. The energy of the G-line (intrinsic Volts units of Joule) is equal to the product of the structural frequency of the G-line and the discrete Planck constant, divided by the Coulomb.

The structural frequency of the F-line is equal to the product of the Rydberg calculated wave number and the photon velocity. The energy of the F-line (intrinsic Volts units of Joule) is equal to the product of the structural frequency of the F-line and the discrete Planck constant, divided by the Coulomb.

The photon emission-absorption frequency of the G-F transition is equal to the structural frequency of the G-line minus the structural frequency of the F-line. The energy of the G-F transition (intrinsic Volts units of Joule) is equal to the energy of the G-line minus the energy of the F-line.

The identical process is used to calculate the emission-absorption frequencies and energies for all spectral series.

Note there is no transition frequency or energy between the G-line and the series limit because the series limit is emission-only.
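
The line-by-line process just described reduces to a few lines of code. The sketch below is a minimal reconstruction, assuming the text's R = 3.28E15 Hz and the 4.0E-15 Vi(J)-per-Hz energy-to-frequency ratio implied by the text's own quoted values (for example, 11.662222 Vi(J) at 2.915555E15 Hz); it reproduces the Lyman structural frequencies, energies, and adjacent-line transition photons:

    # Lyman structural frequencies (A = n 2 ... G = n 8), energies, and
    # adjacent-line emission-absorption photons. The 4.0E-15 Vi(J)/Hz
    # factor is implied by the text's quoted line values; it plays the
    # role of the discrete Planck constant divided by the Coulomb.
    R = 3.28e15          # Hz
    V_PER_HZ = 4.0e-15   # Vi(J) per Hz

    lines = "ABCDEFG"    # n = 2, 3, ..., 8
    freqs = {line: R * (1.0 - 1.0 / n**2)
             for line, n in zip(lines, range(2, 9))}

    for line, f in freqs.items():
        print(f"Lyman-{line}: {f:.6e} Hz, {f * V_PER_HZ:.6f} Vi(J)")

    # Transition photons between adjacent lines (B-A, C-B, ..., G-F)
    for hi, lo in zip(lines[1:], lines[:-1]):
        df = freqs[hi] - freqs[lo]
        print(f"Lyman {hi}-{lo}: {df:.6e} Hz, {df * V_PER_HZ:.6f} Vi(J)")

The printed values match the figures quoted in the list below, e.g. the Lyman G-F photon at 1.568878E13 Hz and 0.062755 Vi(J).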

Lyman series transition photons identical to Balmer series photons:

  • When a Lyman-C positional resonance drops down to Lyman-B, the Lyman-C energy is emitted as two photons: an 11.662222 Vi(J) Lyman-B photon (frequency 2.915555E15) and a 0.637777 Vi(J) Lyman C-B photon (frequency 1.594444E14). The frequency and wavelength of the transition photon are identical to those of the Balmer B-A transition photon.
  • When a Lyman-D positional resonance drops down to Lyman-C, the Lyman-D energy is emitted as two photons: a 12.300000 Vi(J) Lyman-C photon (frequency 3.075000E15) and a 0.295200 Vi(J) Lyman D-C photon (frequency 7.380000E13). The frequency and wavelength of the transition photon are identical to those of the Balmer C-B transition photon.
  • When a Lyman-E positional resonance drops down to Lyman-D, the Lyman-E energy is emitted as two photons: a 12.595200 Vi(J) Lyman-D photon (frequency 3.148800E15) and a 0.160356 Vi(J) Lyman E-D photon (frequency 4.008888E13). The frequency and wavelength of the transition photon are identical to those of the Balmer D-C transition photon.
  • When a Lyman-F positional resonance drops down to Lyman-E, the Lyman-F energy is emitted as two photons: a 12.755555 Vi(J) Lyman-E photon (frequency 3.188888E15) and a 0.096689 Vi(J) Lyman F-E photon (frequency 2.417225E13). The frequency and wavelength of the transition photon are identical to those of the Balmer E-D transition photon.
  • When a Lyman-G positional resonance drops down to Lyman-F, the Lyman-G energy is emitted as two photons: a 12.852245 Vi(J) Lyman-F photon (frequency 3.213061E15) and a 0.062755 Vi(J) Lyman G-F photon (frequency 1.568878E13). The frequency and wavelength of the transition photon are identical to those of the Balmer F-E transition photon.

The equivalence of Balmer-A with the Lyman B-A transition can be extended to the Paschen, Brackett, Pfund and Humphreys series:

The Lyman C-B transition is equal to the energy and frequency of Paschen-A.

The Lyman D-C transition is equal to the energy and frequency of Brackett-A.

The Lyman E-D transition is equal to the energy and frequency of Pfund-A.

The Lyman F-E transition is equal to the energy and frequency of Humphreys-A.
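
These identities are algebraic: the Lyman transition between levels n + 1 and n and the A-line of the series with m = n are the same Rydberg expression. A short check, using the same assumed R as above:

    # Each Lyman adjacent-line transition equals the A-line of a higher
    # series: both are R * (1/n^2 - 1/(n+1)^2).
    R = 3.28e15  # Hz

    def f(m, n):
        return R * (1.0 / m**2 - 1.0 / n**2)

    for name, m in [("Balmer-A", 2), ("Paschen-A", 3),
                    ("Brackett-A", 4), ("Pfund-A", 5), ("Humphreys-A", 6)]:
        lyman_transition = f(1, m + 1) - f(1, m)
        print(f"Lyman transition {lyman_transition:.6e} Hz = "
              f"{name} {f(m, m + 1):.6e} Hz")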

An explanation of atomic spectra begins with the ionization energies.

In atoms with more than one proton, the discretely exact elemental ionization energy (shown in red), above which the atom no longer exists, is equal to the product of the square of the number of protons and the discretely exact value for the hydrogen ionization energy. The intermediate ionization energies (shown in blue) are equal to the CRC values divided by omega-2.

The ionization frequency is equal to the product of the ionization energy and the Coulomb divided by the discrete Planck constant.

The ionization wave number is equal to the ionization frequency divided by the photon velocity.

The photon wavelength is the inverse of the wave number.
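
The four conversion steps above chain together directly. This is a minimal sketch: the 4.0E-15 Vi(J)-per-Hz factor is the one implied by the text's line values, and the photon velocity is left as a parameter because this excerpt does not quote the text's discrete value for it.

    # Energy -> frequency -> wave number -> wavelength, as stated above.
    V_PER_HZ = 4.0e-15   # Vi(J) per Hz (discrete Planck constant / Coulomb)

    def ionization_frequency(energy_vi):
        """Frequency = energy * Coulomb / discrete Planck constant."""
        return energy_vi / V_PER_HZ

    def ionization_wave_number(freq_hz, photon_velocity):
        """Wave number = frequency / photon velocity."""
        return freq_hz / photon_velocity

    def photon_wavelength(wave_number):
        """Wavelength is the inverse of the wave number."""
        return 1.0 / wave_number

    # e.g. ionization_frequency(13.12) returns 3.28E15 Hz, consistent
    # with the text's hydrogen values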

The difference between the calculated and measured values for the hydrogen ionization energy, divided by the difference between the measured and calculated wavelengths for hydrogen ionization, is very nearly equal to the difference between the photon velocity and the speed of light.

The difference between these two values, independent of how it is calculated, is a measurement error term of approximately 0.00468%.

The differences between the measured and calculated values for hydrogen are of no concern and, even though the Rydberg equations derive the measurable wavelengths to high accuracy, the explanation requiring the simultaneous emission of two photons is not consistent with the spectral mechanism hypothesis.

The Rydberg explanation for the emission of atomic spectra requires two frequencies:

  • One frequency is the structural frequency, which is proportional to the energy of the positional resonance between an electron and a proton (the energy required to hold the electron and proton in the positional resonance).
  • The other is the photon frequency, which is equal to the difference between adjacent structural frequencies and is proportional to an ionization energy (the energy required to remove an electron from the positional resonance).

The photon frequency and wavelength are not directly proportional to structural energy and, in atoms larger than hydrogen, cannot be calculated by a Rydberg equation.

Proofs that wavelength and frequency are not directly proportional to energy:

  • Spectral wavelengths emitted by sources differing greatly in energy, by a discharge tube in the laboratory, by the sun or by the galactic center, are indistinguishable.
  • In 60 Hertz power transformers the energy of the emitted photons is proportional to the energy of the current (or the magnetic field).

A general explanation for atomic spectra requires an examination of the measured ionization energies and the measured wavelengths of the first four elements larger than hydrogen.

The number of CRC ionization energies (electron Volts in units of kinetic Joule) for each elemental atom larger than hydrogen is equal to the number of nuclear protons; and the number of atomic energies (intrinsic Volts in units of discrete Joule) is also equal to the number of nuclear protons.

While it is true that measured wavelengths are not directly proportional to energy, it is also true that shorter wavelengths correspond to lower energies and longer wavelengths correspond to higher energies. For example, ultraviolet photons have shorter wavelengths and lower energies, and visible photons have longer wavelengths and higher energies.

In any atomic spectrum, each measured wavelength corresponds to one specific energy and, in order for each measured wavelength to correspond to one specific energy, the number of wavelengths must either be equal to the number of energies or equal to an integer multiple of the number of energies.

For example, in helium there are two CRC ionization energies (electron Volts in units of kinetic Joule) corresponding to two atomic energies (intrinsic Volts in units of discrete Joule), fourteen measured wavelengths, and one transition between a wavelength proportional to a lower energy and a wavelength proportional to a higher energy.

In the helium table, seven lower and seven higher helium atomic energies are in the first row, the measured wavelengths from shortest to longest are in the third row, and each entry in the second row is the ratio of the wavelength in that column to the adjacent shorter wavelength. This is the definitive test for a transition from a wavelength corresponding to a lower energy to a wavelength corresponding to a higher energy. In the helium atom, the transition wavelength is also detectable by inspection of the previous wavelengths compared to the following wavelengths.
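
The ratio test just described is mechanical and can be expressed as a short function. The measured wavelengths themselves are not reproduced in this excerpt, so the sketch below takes them as input:

    # Find the transition from the lower-energy block of wavelengths to
    # the higher-energy block: the largest ratio of a wavelength to the
    # adjacent shorter wavelength marks the boundary.
    def find_transition(wavelengths):
        """wavelengths: measured values sorted shortest to longest.
        Returns the index of the first wavelength past the transition."""
        ratios = [wavelengths[i] / wavelengths[i - 1]
                  for i in range(1, len(wavelengths))]
        return 1 + max(range(len(ratios)), key=ratios.__getitem__)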

The transitions are less clear in lithium, beryllium, and boron.

In lithium, beryllium and boron the transition wavelengths are not definitively detectable by simple inspection. However, after the higher energy transitions are established by the ratios of the column wavelength divided by the adjacent lower wavelength, the first transition becomes apparent by inspection of the measured wavelengths.

The spectral mechanism hypothesis has been transformed into a general explanation for atomic spectra:

In hydrogen a single electron and proton are engaged in a positional resonance at a discretely exact frequency equal to 3.28E15 Hz. In atoms larger than hydrogen many electrons and protons are engaged in sustained positional resonances, at frequencies equal to the product of the square of the number of nuclear protons and 3.28E15 Hz, in which CCW quantons are emitted in one direction by electrons and absorbed by nuclear protons, and CW quantons are emitted in the opposite direction by nuclear protons and absorbed by electrons. The positional resonances can be situated at any one of several quantized positions proportional to energy and corresponding to spectral emission and absorption lines. On emission of a photon the energy of the resonance decreases, and the electron drops to a lower energy level. On absorption of a photon the energy of the resonance increases, and the electron jumps to a higher energy level.

Part Six

Cosmology

The purpose of this chapter is to disprove cosmic inflation:

  • The radiated intrinsic energy which drives the resonance of constant photon velocity is converted into units of intrinsic redshift per megaparsec.
  • A detailed general derivation of intrinsic redshift (applicable to any galaxy) is made.
  • The final results of the HST Key Project to measure the Hubble Constant are explained by intrinsic redshift.22

The only measurables in the determination of galactic redshifts are the photon wavelength emitted and received in the laboratory, the photon wavelength emitted by a galaxy and received by an observatory, and the ionization energies.

In the following equations Hydrogen-alpha (Balmer-A) wavelengths are used in calculations of intrinsic redshift.

Intrinsic redshift per megaparsec

The photon intrinsic energy radiated per second due to quanton/graviton emissions is equal to the product of 8 and the discrete Planck constant.

The 2015 IAU value for the megaparsec is proportional to the IAU's exact SI definition of the astronomical unit (149,597,870,700 m).

The time of flight per megaparsec is equal to one mpc divided by the photon velocity.

The photon intrinsic energy radiated per megaparsec is equal to the product of time of flight per mpc and the photon intrinsic energy radiated per second due to quanton/graviton emissions.

The decrease in photon frequency due to the energy radiated is equal to the photon intrinsic energy radiated per megaparsec divided by the discrete Planck constant.

The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.

Note that wavelength and energy are independent, so wavelength cannot be directly determined from energy; but frequency is proportional to energy, and the decrease in frequency is proportional to the increase in wavelength.

The intrinsic redshift per megaparsec is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.

General derivation of galactic intrinsic redshift

The distance of the galaxy in units of mpc is that determined by the Hubble Space Telescope Key Project.23 Below, the example calculations are for NGC0300.

The time of flight of photons emitted by NGC0300 is equal to the product of the time of flight per megaparsec and the Hubble Space Telescope Key Project distance of the galaxy.

The photon intrinsic energy radiated by NGC0300 is equal to the product of the time of flight at the distance of NGC0300 and the photon intrinsic energy radiated per second due to quanton/graviton emissions.

The decrease in photon frequency is equal to the photon intrinsic energy radiated by NGC0300 divided by the discrete Planck constant.

The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.

The intrinsic redshift at the distance of NGC0300 is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.
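
The per-megaparsec and per-galaxy derivations above differ only in the distance used, so one sketch covers both. The megaparsec length, photon velocity, and discrete Planck constant below are stand-ins (the text's own discrete values are derived elsewhere in the document), and the HST Key Project distance for a galaxy such as NGC0300 is not quoted in this excerpt, so distance is a parameter:

    # Intrinsic redshift at a given distance, following the steps above.
    H_DISCRETE = 6.4e-34   # J*s; assumption consistent with the text's
                           # 4.0E-15 Vi(J)/Hz ratio and a 1.6E-19 Coulomb
    MPC_M = 3.0857e22      # metres per megaparsec (IAU-based value)
    C_PHOTON = 3.0e8       # m/s; stand-in for the text's photon velocity
    R = 3.28e15            # Hz
    LAMBDA_HA = C_PHOTON / (R * (1.0/4 - 1.0/9))  # H-alpha (Balmer-A), m

    def intrinsic_redshift(distance_mpc):
        """Redshifted H-alpha wavelength (m) at distance_mpc."""
        time_of_flight = distance_mpc * MPC_M / C_PHOTON       # seconds
        energy_radiated = time_of_flight * 8 * H_DISCRETE      # joules
        freq_decrease = energy_radiated / H_DISCRETE           # = 8 * t
        wavelength_increase = C_PHOTON / freq_decrease         # metres
        return LAMBDA_HA + wavelength_increase

    per_mpc = intrinsic_redshift(1.0)   # the per-megaparsec figure

Note that the discrete Planck constant cancels in the frequency step: the frequency decrease is simply eight times the time of flight.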

Results of the HST Key Project to measure the Hubble Constant

The goal of this massive international project, involving more than fifteen years of effort by hundreds of researchers, was to build an accurate distance scale for Cepheid variables and use this information to determine the Hubble constant to an accuracy of 10%.

The inputs to the HST key project were the observed redshifts and the theoretical relativistic expansion rate of cosmic inflation.

In column 2 below, the galactic distances of 22 galaxies in units of mpc are the values determined by the HST Key Project.24

In column 3 below, the galactic distances are expressed in units of meter.

In column 4 below, the time of flight of photons emitted by the galaxy is equal to the distance of the galaxy in meters divided by the photon velocity.

The photon intrinsic energy radiated due to quanton/graviton emissions at the distance of the galaxy is equal to the product of the time of flight of photons emitted by the galaxy and the photon intrinsic energy radiated per second.

The decrease in photon frequency is equal to the photon intrinsic energy radiated by the galaxy divided by the discrete Planck constant.

The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by the decrease in photon frequency.

In column 5 below, the intrinsic redshift at the distance of the galaxy is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.

The Hubble parameter for a galaxy, denominated in units of km/s per mpc, is equal to the product of two ratios: 2 omega-2 (which converts intrinsic energy to kinetic energy) divided by the time of flight of photons received at the observatory that were emitted by the galaxy, and the distance of the galaxy in units of kilometer divided by the distance of the galaxy in units of megaparsec.

The Hubble constant is equal to the sum of the Hubble parameters for the galaxies examined divided by the number of galaxies.
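
The column walkthrough and the Hubble-parameter definition above amount to the following sketch. The factor omega-2 is derived elsewhere in the document and is not quoted in this excerpt, so it is left as a parameter; the megaparsec length and photon velocity are the same stand-ins as before:

    # Hubble parameter per galaxy (km/s per mpc) and the averaged
    # Hubble constant, per the text's definition.
    MPC_M = 3.0857e22    # metres per megaparsec
    C_PHOTON = 3.0e8     # m/s, stand-in for the text's photon velocity

    def hubble_parameter(distance_mpc, omega_2):
        time_of_flight = distance_mpc * MPC_M / C_PHOTON   # seconds
        km_per_mpc = MPC_M / 1000.0                        # d_km / d_mpc
        return (2.0 * omega_2 / time_of_flight) * km_per_mpc

    def hubble_constant(distances_mpc, omega_2):
        values = [hubble_parameter(d, omega_2) for d in distances_mpc]
        return sum(values) / len(values)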

The theory of cosmic inflation has been disproved.

Part Seven

Magnetic levitation and suspension

This chapter was motivated by a video about quantum magnetic levitation and suspension in which superconducting disks containing thin films of YBCO are levitated and suspended on a track composed of neodymium magnet arrays in which a unit array contains four neodymium magnets (two diagonal magnets oriented N→S and the other two S→N).25

An understanding of levitation and suspension by neodymium magnet arrays begins with consideration of the differences between the levitation of a superconducting disk containing thin films of metal oxides and the levitation of a thin slice of pyrolytic carbon.

Oxygen is paramagnetic. An oxygen atom is magnetized by the magnetic field of a permanent magnet in the direction of the external magnetic field (for example, a S→N external magnetic field induces a S→N internal field) and reverts to a demagnetized state when the field is removed. The levitation of a superconducting disk requires an array of neodymium magnets and cooling below the critical temperature. In quantum levitation or suspension, the position of the disk is established by holding (pinning) it in the desired location and orientation, and if a pinned disk is forced into a new location and orientation, it remains pinned in the new location.

Carbon is diamagnetic. A carbon atom is magnetized by a magnetic field in the direction opposite to the magnetic field (for example, a N→S external magnetic field induces a S→N internal field) and reverts to a demagnetized state when the field is removed. Magnetic levitation occurs at room temperature, a thin slice of pyrolytic carbon levitates at a fixed distance parallel to the surface of an array of neodymium magnets, and a levitated slice forced closer to the surface springs back to the fixed distance once the force is removed.

Above, levitation of pyrolytic carbon.26

In the levitation of pyrolytic carbon, CCW quantons are emitted by a magnetic North pole and CW quantons are emitted by a magnetic South pole (magnetic emission of quantons is discussed in Part Four).

The number of chirality meshing interactions required to exactly oppose the gravitational force on a thin slice of pyrolytic carbon (or any object) is equal to the local gravitational constant of earth divided by the product of the proton amplitude and the square root of Lambda-bar.

In the above equation, the local gravitational constant of earth (as derived in Part One) is equal to 10 meters per second per second, the proton amplitude (also derived in Part One) is equal to 150, and (as derived in Part Four) the square root of Lambda-bar is the deflection distance (units of meter) of a single chirality meshing interaction between a quanton and an electron.

The above equation is proportional to energy: the higher the energy, the higher the number of chirality meshing interactions, and the higher the levitation distance; the lower the energy, the lower the number of chirality meshing interactions, and the lower the levitation distance.

Pyrolytic carbon is composed of planar sheets of carbon atoms in which a unit cell is composed of a hexagon of carbon atoms joined by double bonds. Carbon atoms are bonded by either lower energy single bonds proportional to the first ionization energy or higher energy double bonds proportional to the second ionization energy. The measured first and second ionization energies of carbon are 1086.5 and 2352.0 (units of kJ/mol)27.

Due to the discretely exact value of PE charge resonance, in carbon (or any elemental atom) the quanton emission-absorption frequency is equal to 3.28E15 Hz.

The quanton emission frequency of a unit cell of pyrolytic carbon is equal to the product of the discretely exact PE charge resonance frequency of 3.28E15 Hz and the ratio of the second ionization energy of carbon divided by the first ionization energy of carbon.

The levitation distance of a thin slice of pyrolytic carbon (in units of mm) is equal to the product of the ratio of quanton emission frequency of a pyrolytic carbon unit cell divided by six (the number of carbon atoms in a unit cell) times 1000 mm/m and the square root of Lambda-bar.
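
The two calculations above, the number of chirality meshing interactions and the carbon levitation distance, can be sketched directly. The square root of Lambda-bar is derived in Part Four and is not quoted in this excerpt, so it is left as a parameter:

    # Chirality meshing interactions and pyrolytic-carbon levitation
    # distance, following the definitions above.
    G_LOCAL = 10.0            # m/s^2, the text's local constant (Part One)
    PROTON_AMPLITUDE = 150.0  # from Part One
    R = 3.28e15               # Hz, the discrete PE charge resonance
    IE1_C, IE2_C = 1086.5, 2352.0   # kJ/mol, carbon 1st/2nd ionization

    def interactions_to_oppose_gravity(sqrt_lambda_bar):
        return G_LOCAL / (PROTON_AMPLITUDE * sqrt_lambda_bar)

    def carbon_levitation_mm(sqrt_lambda_bar):
        unit_cell_freq = R * (IE2_C / IE1_C)  # quanton emission frequency
        # six carbon atoms per unit cell; 1000 converts metres to mm
        return (unit_cell_freq / 6.0) * 1000.0 * sqrt_lambda_bar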

The oxygen atoms in YBCO oxides are bonded by either lower energy single bonds proportional to the first ionization energy or higher energy double bonds proportional to the second ionization energy. The measured first and second ionization energies of oxygen are 1313.9 and 3388.3 (units of kJ/mol).

The three YBCO metallic oxides are composed of low energy single bonds, high energy double bonds, or single and double bonds. In yttrium oxide (Y2O3), a single bond connects each yttrium atom with the inside oxygen, and a double bond connects each yttrium atom with one of the two outside oxygens. In barium oxide (BaO) the two atoms are connected by a double bond. Copper oxide is a mixture of cuprous oxide (copper(I) oxide), in which a single bond connects each of two copper atoms with the oxygen atom, and cupric oxide (copper(II) oxide), in which a double bond connects the copper atom with the oxygen atom.

Voltage is the emission of quantons either directly by the Q-axis of an electron or proton or transversely by a magnetic field from which CCW quantons are emitted by the North pole and CW quantons by the South pole.

The mechanism of magnetic levitation or suspension of a superconducting disk is the absorption of quantons, emitted by a neodymium magnet array, in chirality meshing interactions by electrons in the oxygen atoms of superconducting YBCO oxides, resulting in repulsive deflections due to CCW quantons (in quantum levitation) and attractive deflections due to CW quantons (in quantum suspension).

The levitation or suspension distance of a superconducting YBCO oxide is higher (the maximum distance) for double bonded oxides and lower (the minimum distance) for single bonded oxides. The initial position of the YBCO disk is established by momentarily holding (pinning) it in the desired location and orientation at some specific distance from the neodymium magnet array.

In each one-hundredth of a second, more than 2E14 chirality meshing interactions establish the intrinsic energy of electrons within the superconducting oxides. At the same time, at any specific distance above or below the neodymium magnet array, the number of quanton interactions, inversely proportional to the square of the distance, establishes the availability of quantons to be absorbed at that specific distance. The result is an electrical Stable Balance of the electrons in superconducting oxides at specific distances from the neodymium magnet array, analogous to the gravitational Stable Balance of particles in planets at a specific orbital distance from the sun.

This is the mechanism of pinning in YBCO superconducting disks.

The levitation or suspension distance (units of mm) of a single bonded superconducting YBCO oxide is equal to the product of the ratio of the first ionization energy of oxygen divided by itself, the discretely exact PE charge resonance of 3.28E15 Hz, the square root of Lambda-bar, the ratio of the discrete steric factor divided by 1 (single bond), and 1000 (to convert m to mm).

The levitation or suspension distance (units of mm) of a double bonded superconducting YBCO oxide is equal to the product of the ratio of the second ionization energy of oxygen divided by the first ionization energy of oxygen, the discretely exact PE charge resonance of 3.28E15 Hz, the square root of Lambda-bar, the ratio of the discrete steric factor divided by 2 (double bond), and 1000 (to convert m to mm).
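
The single- and double-bond distances above differ only in the energy ratio and the bond divisor, so one function covers both. The discrete steric factor and the square root of Lambda-bar are derived elsewhere in the document and are parameters here:

    # Levitation/suspension distance (mm) for single- and double-bonded
    # superconducting YBCO oxides, per the definitions above.
    R = 3.28e15                      # Hz, the discrete PE charge resonance
    IE1_O, IE2_O = 1313.9, 3388.3    # kJ/mol, oxygen 1st/2nd ionization

    def ybco_distance_mm(bond_order, steric_factor, sqrt_lambda_bar):
        # single bond: IE1/IE1 = 1; double bond: IE2/IE1
        energy_ratio = 1.0 if bond_order == 1 else IE2_O / IE1_O
        return (energy_ratio * R * sqrt_lambda_bar
                * (steric_factor / bond_order) * 1000.0)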

1 Original letter from Isaac Newton to Richard Bentley, 189.R.4.47, ff. 7-8, Trinity College Library, Cambridge, UK http://www.newtonproject.ox.ac.uk

2 https://nssdc.gsfc.nasa.gov/planetary/planetfact.html, accessed Dec 24, 2021

3 Urbain Le Verrier, Reports to the Academy of Sciences (Paris), Vol 49 (1859)

4 Clemence, G.M., "The relativity effect in planetary motions," Reviews of Modern Physics, 1947, 19(4): 361-364.

5 Eric Doolittle, The secular variations of the elements of the orbits of the four inner planets computed for the epoch 1850 GMT, Trans. Am. Phil. Soc. 22, 37(1925).

6 Michael P. Price and William F. Rush, Nonrelativistic contribution to mercury’s perihelion precession. Am. J. Phys. 47(6), June 1979.

7 Wikimedia, by Daderot made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication, location National Museum of Nature and Science, Tokyo, Japan.

8 Illustration from 1908 Chambers’s Twentieth Century Dictionary. Public domain.

9 Wikimedia “Sine and Cosine fundamental relationship to Circle and Helix” author Tdadamemd.

10 By Jordgette – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=9529698

11 By Ebohr1.svg: en:User:Lacatosias, User:Stanneredderivative work: Epzcaw (talk) – Ebohr1.svg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15229922

12 https://www.nobelprize.org/prizes/physics/1927/summary/

13 O. Stern, Z. für Physik, 7, 249 (1921); title in English: "A way to experimentally test the directional quantization in the magnetic field."

14 Ronald G. J. Fraser, Molecular Rays, Cambridge University Press, 1931.

15 I.I. Rabi, S. Millman, P. Kusch, and J.R. Zacharias, "The Molecular Beam Resonance Method for Measuring Nuclear Magnetic Moments," Physical Review, 1939.

16 INDC: N. J. Stone 2014. Nuclear Data Section, International Atomic Energy Agency, www-nds.iaea.org/publications

17 “Quantum theory yields much, but it hardly brings us close to the Old One’s secrets. I, in any case, am convinced He does not play dice with the universe.” Letter from Einstein to Max Born (1926).

18 “That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has … any competent faculty of thinking can ever fall into it.” Original letter from Isaac Newton to Richard Bentley, 189.R.4.47, ff. 7-8, Trinity College Library, Cambridge, UK http://www.newtonproject.ox.ac.uk

19 Ionization energies of the elements (data page), https://en.wikipedia.org/

20 How to determine the range of acceptable results for your calorimeter, Bulletin No. 100, Parr Instrument Company, www.parrinst.com.

21 See www.wikipedia.org, www.hyperphysics.com, www.shutterstock.com

22 Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.

23 Page 60, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.

24 Page 60, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.

25 "Dr. Boaz Almog: Quantum Levitation," https://www.youtube.com/watch?v=4HHJv8lPERQ

26 This image has been released into the public domain by its creator, Splarka. https://commons.wikimedia.org/wiki/File:Diamagnetic_graphite_levitation.jpg

27 Ionization energies of the elements (data page), https://en.wikipedia.org/

What Priorities For First Canadian Minister of Artificial Intelligence?

Canada is great at AI development, but what should the country's first Minister for Artificial Intelligence make his key priorities? University of Waterloo's Anindya Sen and the C.D. Howe Institute's Rosalie Wyonch offer strong insight, and geek out a bit about the economics-oriented nature of machine learning algorithms.

An Intelligent AI Policy for Canada


The Inimitable Mezger Engine

Take it from our friends at Rennlist: Porsche has built some truly remarkable engines over the years. The air-cooled 911/83 engine that powered the 1973 911 2.7L Carrera RS is just one example. But if you were asked to list the ten all-time greatest Porsche engines, there is a good chance the list would be dominated by various Mezger engines.

The 12-cylinder found in the Le Mans-winning Porsche 917? That’s a Mezger. The 3.6L flat six in the 996 GT3? That’s a Mezger. The 4.0L in the 997 GT3 RS 4.0? That’s a Mezger.

How about going all the way back to the original 901/911 engine? Yup, that’s a Mezger.

But what is a Mezger engine, and why are they so special? That is what we are going to discuss here today. We have come up with 9 reasons why the Mezger engine is so special, and there is no better place to begin than with the legendary man behind these engines, Hans Mezger.

1. Hans Mezger

A single section can in no way capture all that the legendary Hans Mezger accomplished. He joined Porsche back in October of 1956. He loved Porsche sports cars, but his first job was working on diesel engine development. In 1960, he began to work on the Type 753 flat-eight engine for Porsche's first Formula 1 car. Soon after, he designed the 6-cylinder boxer engine for the 901/911. He was then promoted to head of race car design. He was responsible for the 917 and the 12-cylinder engine that powered it to Porsche's first Le Mans victory in 1970, and then for the turbocharged 917/10 and 917/30 cars that dominated Can-Am. He also designed and developed the six-cylinder turbo engines for the Type 935 and 936 race cars.

Mezger designed the 1.5L V6 engine known as the TAG Turbo that powered the McLaren Formula 1 cars to championships in 1984, 1985 and 1986. His engines would eventually be found in the most performance-oriented Porsche road cars such as the 996 GT3, GT2 and Turbo. Mezger remained closely connected with the Porsche brand until he passed away on June 10, 2020, at the age of 90.

2. Motorsport Pedigree

Mezger built engines for the most demanding races in the world. His engines that were put into Porsche road cars have the same engineering approach. These engines are designed for long-term high performance. They are essentially overbuilt for road use. These engines were not designed to meet a certain price point. They were designed to provide the best performance. There were no corners cut with any Mezger engine.

3. Birth of the GT3

Many people view the 911 GT3 models as the pinnacle of the 911 range, and one of the main reasons is the track-focused, high-revving flat-six engine out back. It all started with the M96.79 engine found in the 996 GT3. The European market got the GT3 a few years before the United States did, with the M96.76 engine, but the point is the same. The GT3 legend began in large part because of the incredible engine that powered it. This dry-sump engine could rev to 8,200 rpm all day long. The engine was derived from that of the Porsche 911 GT1-98, which happened to win a little race called the 24 Hours of Le Mans. The street version of this engine is nearly bulletproof, and the GT3 legend was born.

4. Turbocharged Versions

If the GT3 was just not powerful enough for you, Porsche had a solution. The GT2 and Turbo also used Mezger engines, but with a pair of turbochargers. They are not as high-revving as the normally aspirated units, but they offer more power and a lot more torque. And these engines are just as reliable.

5. Reliability

The Mezger engines are not just more powerful but also more reliable. The knock on the M96 and M97 engine series has long been the IMS bearing. But the Mezger versions don’t have the same design. Instead, they use plain bearings that are pressure-fed engine oil for lubrication. These bearings don’t fail. That alone makes the Mezger significantly more reliable.

6. Sound

Even if these engines were not more durable and powerful, people would buy them for their sound alone. It is not just their high-revving nature in naturally aspirated form; the design of the engine itself, with features such as dual timing chains, gives these engines a more characterful sound. They are more gravelly and "motorsporty" sounding than the non-Mezger engines.

7. Power Upgrades

These engines were overbuilt and as such, are typically able to comfortably handle more power if you want to modify them. The turbo versions can easily be tuned to reliably make more power. Of course, every engine has its limitations, but the Mezger engine is robust enough to make more power without hurting reliability.

8. The 997 GT3 RS 4.0

Many people consider the 997 GT3 RS 4.0 to be the best Porsche 911 road car of all time. It just so happens to be equipped with the last Mezger engine. A 4.0L jewel making nearly 500 naturally aspirated horsepower. The engine revs to 8,500 rpm and has more character in it than an entire truckload of new 992.2 Carreras. The 4.0L marked the end of an era. It is the last and possibly the best road-going Mezger engine ever produced.

9. Rarity and Desirability

Not every Porsche got a Mezger engine. Technically, all the air-cooled 911s have a Mezger-designed engine, but they have been out of production for over a quarter of a century now. Only a small percentage of water-cooled Porsche engines were a Mezger design. And Porsche is not building any more of them. So, what is out there today is all that will ever be out there. These engines are found in the most desirable Porsche models, and these cars are collectible today and will continue to be collectible for the foreseeable future. If you buy a Porsche with a Mezger engine today, the chances are good that it will be worth the same or even more tomorrow. For the Silo, Joe Kucinski.

Images: Porsche

>>Join the conversation about the Mezger engine right here via our friends at Rennlist.com.

AI Tinkerers Take Note - Effective Prompting Can Build Actual Products

Hello AI Tinkerers and welcome to the latest Sci-Tech article here at The Silo. Pay attention, because the spotlight is on a developer who knows how to get around bad AI prompting. Just recently, he has helped spin out 40 startups using one core skill. Can you guess which one? Yep. Prompting.

In the One-Shot video below, Kevin Leneway breaks down his real workflow for shipping AI products fast — using markdown checklists, agent coding, rubric-based UI design, and zero Figma.

“I don’t need Figma. I just prompt my way to a working front end.” — Kevin Leneway

While most people are still asking ChatGPT to write code snippets, Kevin is building full-stack products using nothing but prompts. In this One-Shot episode, he reveals the exact system he’s used to launch over 40 startups at Pioneer Square Labs. We break down:

  • How he writes BRDs and PRDs that don’t suck
  • Why vibe coding fails and how to actually use AI agents
  • The markdown checklist that replaces a product team
  • How to go from idea to working app with zero context switching
  • His open-source starter kit that makes Cursor and Claude 3.5 feel like magic

“I’ve helped launch six startups including Singlefile (singlefile.io, $24M raised), Recurrent (recurrentauto.com, $24M raised), Joon (joon.com, $9.5M raised), Gradient (gradient.io, $3.5M raised), Genba (genba.ai, acquired May 2022) and Enzzo (enzzo.ai, $3M raised).”

If you’re a builder, this will change how you work. No gimmicks. Just a ruthless focus on speed, clarity, and shipping. Watch now. Learn the system. Steal it. For the Silo, Joe at aitinkerers.org

Featured image: DALL·E robot dressed like Shakespeare - AllAboutLean.com.

Dupe Culture & Digital Deception Inside AI-Driven Counterfeit Boom

While generative AI transforms how Americans shop, it’s also quietly powering a counterfeit crisis now spiraling out of control. A groundbreaking new report from Red Points and OnePoll, The Counterfeit Buyer Teardown, reveals that AI is no longer just helping consumers find the best deals—it’s helping them find fakes. From influencer-driven “dupe culture” to hyper-realistic fake storefronts, the study exposes a booming underground economy that’s been supercharged by technology. With 28% of counterfeit buyers now using AI tools to seek out knock-offs, and fraudulent social media ads spiking 179% in just one year, the findings deliver a wake-up call for brands, regulators, and shoppers alike.

AI Supercharging U.S. and Other E-Commerce Counterfeit Crisis


Image courtesy of Red Points.

An explosive new report, “The Counterfeit Buyer Teardown,” paints a concerning picture of a rapidly evolving and increasingly sophisticated counterfeit goods market, driven by a new factor: Artificial Intelligence. Forget the back alleys; findings from the research—conducted by market research firm OnePoll and AI company Red Points in February 2025—highlight that the future of fakes is digital, AI-assisted, and alarmingly mainstream.

The convergence of technology, social media, and shifting consumer mindsets is reshaping e-commerce—and not always for the better. As AI accelerates both the spread and appeal of counterfeit goods, the challenge is no longer just spotting fakes—it’s confronting a counterfeit economy that’s growing smarter, faster, and harder to contain.

“As counterfeiters adopt advanced tools like AI, the fight against fakes is becoming more complex and more urgent,” said Laura Urquizu, CEO & President of Red Points. “We’re now seeing AI shape both the threat and the solution. In 2024 alone, our firm detected 4.3 million counterfeit infringements online—an alarming 15% increase year-over-year.”


Alarming indeed. Here are 5 key revelations from the study.

1. AI is the New Enabler of Counterfeiting – A Two-Sided Threat:

  • The Counterfeiters’ Edge: AI is dramatically lowering the barrier to entry for bad actors. They can now mimic brand listings and impersonate social media accounts with unprecedented ease and speed. They can also effortlessly create professional-looking fake websites, a situation that, according to Red Points’ data, is projected to surge 70% in 2025. This isn’t just about cheap knock-offs anymore; it’s about sophisticated deception at scale.
  • The Consumers’ Assistant: Shockingly, 28% of online shoppers who bought fake goods used AI tools to find them. This isn’t a fringe behavior; it’s a growing trend, especially among Gen X, suggesting consumers are actively leveraging AI in their pursuit of cheaper alternatives. This fundamentally shifts the narrative – it’s not just about being tricked; some are actively seeking fakes with AI’s help.

2. Accidental Counterfeiting is a Major Problem – Trust Signals are Being Hijacked:

  • 1 in 4 luxury counterfeit purchases are unintentional. This shatters the perception that buyers knowingly seek out high-end fakes. Realistic pricing, secure payment promises, and active (but fake) social media presence are successfully deceiving consumers. AI-generated legitimacy cues are becoming indistinguishable from the real deal.
  • Brands are Paying the Price for These Mistakes: A staggering one in three shoppers stop buying from the genuine brand after an accidental counterfeit experience. This highlights the significant damage to brand loyalty and future sales, even when the brand isn’t directly selling the fake. High-trust categories like luxury and toys are particularly vulnerable.

3. The “Dupe Economy” is Real and Influencer-Driven:

  • Nearly a third (31%) of intentional counterfeit buyers were swayed by influencer promotions. Social media is driving the demand for “dupes” – budget-friendly replicas. Authenticity is taking a backseat to price and perceived identical appearance, especially among younger demographics.
  • This isn’t just about saving money; it’s a shift in consumer mindset. The report suggests a growing acceptance of fakes as clever alternatives, fueled by social validation and influencer endorsements.


4. Marketplaces Remain Key, But Social Media and Fake Websites are Surging:

  • Marketplaces (both US and China-based) are still the primary channels for counterfeit purchases. However, fake websites (accounting for 34% of unintentional purchases) and social media are rapidly gaining ground as sophisticated avenues for distribution, amplified by AI’s ability to create convincing facades.
  • Social media ads redirecting to infringing websites saw a massive 179% year-over-year growth. This highlights the increasing sophistication of counterfeiters in leveraging advertising platforms to drive traffic to their fake storefronts.

5. Younger Generations are More Vulnerable in Key Categories:

  • Millennials are significantly more likely to have their personal data stolen after purchasing from fake websites (44% vs. 34% average). This suggests a higher susceptibility to sophisticated phishing scams disguised as legitimate e-commerce sites.
  • Gen Z and Millennials are 2-4 times more likely to accidentally purchase counterfeit luxury goods and toys compared to Baby Boomers. Their online savviness might be a double-edged sword, making them more exposed to deceptive listings.

This study serves as both a consumer alert and a brand wake-up call. The rise of AI as a tool for both counterfeiters and consumers is a seismic shift that demands urgent attention. With compelling data and a clear-eyed look at accidental purchases, influencer-driven “dupe culture,” and the growing sophistication of fake storefronts, the findings paint a stark warning for the future of online shopping. 

“Counterfeiting poses a serious and evolving threat to innovative businesses and consumer safety,” notes Piotr Stryszowski, Senior Economist at the Organization for Economic Co-operation and Development (OECD). “Criminals constantly adapt, exploiting new technologies and shifting market trends—particularly in the online environment. To effectively counter this threat, policymakers need detailed, up-to-date information. This study makes an important contribution to our understanding of how counterfeiters operate and how consumers behave online.”
Ultimately, The Counterfeit Buyer Teardown report underscores a new reality: counterfeiting is no longer confined to shady sellers or easily spotted scams—it’s embedded in the very technologies shaping modern commerce. As AI continues to blur the lines between real and fake, the pressure is on for brands, platforms, and policymakers to respond with equal speed and sophistication. Combating this growing threat will require more than just awareness—it demands collaboration, innovation, and a commitment to restoring trust in the digital marketplace before the counterfeit economy becomes the new normal. For the Silo, Merilee Kern.

Merilee Kern, MBA is a brand strategist and analyst who reports on industry change makers, movers, shakers and innovators: field experts and thought leaders, brands, products, services, destinations and events. Merilee is a regular contributor to the Silo. Connect with her at www.TheLuxeList.com and www.LinkedIn.com/in/MerileeKern.

Source: https://get.redpoints.com/the-counterfeit-buyer-teardown-2025

New Audiophile Equipment Guide Is Ten Volumes Comprehensive

Boulder, Colorado, March 2025 – PS Audio announces the release of The Audiophile’s Guide, a comprehensive 10-volume series on every aspect of audio system setup, equipment selection, analog and digital technology, speaker placement, room acoustics, and other topics related to getting the most musical enjoyment from an audio system. Written by PS Audio CEO Paul McGowan, it’s the most complete body of high-end audio knowledge available anywhere.

The Audiophile’s Guide hardcover book series is filled with clear, practical wisdom and real-life examples that guide readers into getting the most from their audio systems, regardless of cost or complexity. The book includes how-to tips, step-by-step instructions, and real-world stories and examples including actual listening rooms and systems. Paul McGowan noted, “think of it as sitting down with a knowledgeable friend who’s sharing hard-won wisdom about how to make music come alive in your home.”

The 10 books in the series include:

The Stereo – learn the essential techniques that transform good systems into great ones, including speaker placement, system matching, developing critical listening skills, and more.

The Loudspeaker – even the world’s finest loudspeakers will not perform to their potential without proper setup. Master the techniques that help speakers disappear, leaving the music to float in three-dimensional space.

Analog Audio – navigate the world of turntables, phono cartridges, preamps and power amplifiers, and vacuum tubes, and find out about how analog sound continues to offer an extraordinary listening experience.

Digital Audio – from sampling an audio signal to reconstructing it in high-resolution sound, this volume explains and demystifies the digital audio signal path and the various technologies involved in achieving ultimate digital sound quality.

Vinyl – discover the secrets behind achieving the full potential of analog playback in this volume that covers every aspect of turntable setup, cartridge alignment, and phono stage optimization.

The Listening Room – the space in which we listen is a critical yet often overlooked aspect of musical enjoyment. This volume tells how to transform even challenging spaces into ideal listening environments.

The Subwoofer – explore the world of deep bass reproduction, its impact on music and movies, and how to achieve the best low-frequency performance in any listening room.

Headphones – learn about dynamic, planar magnetic, electrostatic, closed-back and open-air models and more, and how headphones can create an intimate connection to your favorite music.

Home Theater – enjoy movies and TV with the thrilling, immersive sound that a great multichannel audio setup can deliver. The book explains how to bring the cinema experience home.

The Collection – this volume distills the knowledge of the above books into everything learned from more than 50 years of Paul McGowan’s experience in audio. Like the other volumes in the series, it’s written in an accessible style yet filled with technical depth, to provide the ultimate roadmap to audio excellence and musical magic.

Volumes one through nine of The Audiophile’s Guide are available for a suggested retail price of $39.99 USD, with Volume 10, The Collection, offered at $49.99 USD. In addition, The Audiophile’s Guide Limited Run Collectors’ Edition is available as a deluxe series with case binding, with the books presented in a custom-made slipcase. Each Collectors’ Edition set is available at $499.00 USD with complimentary worldwide shipping.

About PS Audio
Celebrating 50 years of bringing music to life, PS Audio has earned a worldwide reputation for excellence in manufacturing innovative, high-value, leading-edge audio products. Located in Boulder, Colorado at the foothills of the Rocky Mountains, PS Audio’s staff of talented designers, engineers, production and support people build each product to deliver extraordinary performance and musical satisfaction. The company’s wide range of award-winning products includes the all-in-one Sprout100 integrated amplifier, audio components, power regenerators and power conditioners.
 
www.psaudio.com

For the Silo, Frank Doris.

A Geek’s Guide To Microfiber Towels

Microfibers were invented by Japanese textile company Toray in 1970, but the technology wasn’t used for cleaning until the late 1980s.

The key, as the name suggests, is in the fiber: Each strand is really tiny—100 times finer than human hair—which allows them to be packed densely on a towel. That creates a lot of surface area to absorb water and pick up dust and dirt. Plus, microfibers have a positive electric charge when dry (you might notice the static cling on your towels), which further helps the towel to pick up and hold dirt. “They tend to trap the dirt in but not allow it to re-scratch the finish,” explains professional concours detailer Tim McNair, who ditched old T-shirts and terry cloths for microfibers back in the 1990s.

These days, the little towels are ubiquitous and relatively cheap, but in order to perform wonders consistently, they need to be treated with respect. Below, a miniature guide to microfibers.

Care for Your Towels: Dos and Don’ts

“They’re just towels,” you might say to yourself. But if you want them to last and retain their effectiveness, microfiber towels need more care than your shop rags:

DO: Keep your microfiber towels together in a clean storage space like a Rubbermaid container. They absorb dirt so readily that a carelessly stored one will be dirty before you even use it.

DON’T: Keep towels that are dropped on the ground. It’s hard to get that gunk out and it will scratch your paint.

DO: Reuse your towels. “I have towels that have lasted 15 years,” says McNair. That said, he recommends keeping track of how they’re used. “I’ll use a general-purpose microfiber to clean an interior or two, and I’ll take them home and wash them. After about two, three washings, it starts to fade and get funky, and then that becomes the towel that does lower rockers. Then the lower rocker towel becomes the engine towel. After engines, it gets thrown away.”

DON’T: Wash your microfibers with scented detergent, which can damage the fibers and make them less effective at trapping dirt. OxiClean works great, according to McNair.

DO: Separate your microfibers from other laundry. “Make sure that you keep the really good stuff with the really good stuff and the filthy stuff with the filthy stuff,” says McNair.

DO: Air-dry your towels. Heat from the dryer can damage the delicate fibers. If you’re in a rush, use the dryer’s lowest setting.

How Do You Know Microfiber Is Split Or Not?

A widespread misunderstanding is that you can “feel” if a microfiber towel is made from split microfiber or not by stroking it with your hand. This is false!

The theory is that if the towel seems to “hook” onto tiny imperfections on dry, unmoisturized hands, this is because the fibers are split and microscopically grab your skin. Although this is partially true, you cannot feel split microfiber “hook” onto your skin. These microscopic hooks are way too small to feel, though, as our friends at classiccarmaintenance.com note, they do generate a general surface resistance called “grab.” Yet this is not the “individual” hooking sensation you feel when you touch most microfiber towels. It’s the tiny loops in loop-woven microfiber that are large enough to actually feel grabbing imperfections on your hands (minute skin scales).

Try it for yourself: gently stroke a loop-weave microfiber towel of any kind, split or not. If your hands are dry and unmoisturized, you will feel the typical “hooking” sensation most people hate. It’s simply the loops that catch around the scales on your skin like mini lassos. Take a picture of the microfiber material with your smartphone, zoom in and you can clearly see the loops.

Now try stroking a cut microfiber towel which is not loop-woven, split or not, and it will not give that awful hooking sensation. If you take a picture of this material, you will see a furry surface without those loops. Because there are no loops, it won’t “hook”.

Now you know the truth: it’s the loops that latch onto your skin when you touch a microfiber towel, regardless if the towel is split microfiber or not. Tightly woven microfiber towels without pile (e.g. glass towels) can also have the “hooking” effect, caused by the way their fibers are woven, but less pronounced than loop weave towels.

Another misunderstanding is that a towel that is made of non-split microfiber will “push away” water and is non-absorbent. This also is not true!

Although a non-split microfiber fiber is not absorbent, water is still caught in between the fibers. You can do the test: submerge a 100% polyester fleece garment (check the label), which is always non-split fiber, in a bucket of water and take it out after about 10 sec. Wring it out over an empty bucket and you’ll see that it holds quite a bit of water, meaning it is absorbent.

So, another myth is busted: non-split microfiber can’t be determined simply by testing if it holds water. You can however test how much water it holds. Compare it to a similar dry-weight towel that is known to be split 70/30 microfiber: Submerge both in a bucket of water. If they hold about the same amount of water, they are both split microfiber. If the 70/30 towel holds more than twice as much water, the test towel is more than likely non-split material.   

Tim’s Towels

The budget pack of microfiber towels will serve you fine, but if you want to go down the detailing rabbit hole, there’s a dizzying variety of towel types that will help you do specific jobs more effectively. Here’s what McNair recommends:

General Use: German janitorial supply company Unger’s towels are “the most durable things I’ve ever seen,” says McNair.

Drying: Towels with a big heavy nap are great for drying a wet car (but not so great for taking off polish).

Griot’s Extra-Large Edgeless Drying Towel, $45 USD / $65.09 CAD (Griot’s Garage)

Polishing: Larger edgeless towels are good at picking up polishing compound residue without scratching the paint.

Wheels and other greasy areas: This roll of 75 microfiber towels from Walmart is perfect for down-and-dirty cleaning, like wire wheels. When your towel gets too dirty, throw it away and rip a new one off the roll.

Glass: There are specific two-sided towels for glass cleaning. One side has a thick nap that is good for getting bugs and gunk off the windshield. The other side has no nap—just a smooth nylon finish—that’s good for a streak-free final wipe down.

Griot’s Dual Weave Glass Towels, set of 4, $20 USD / $28.93 CAD (Griot’s Garage)

High Speed Toronto Quebec Rail Plan Underway

A special ‘Study in Brief’ via our friends at cdhowe.org

  • This study estimates the economic benefits of a new, dedicated passenger rail link in the Toronto-Québec City corridor, either with or without high-speed capabilities.
  • Cumulatively, in present value terms over 60 years, economic benefits are estimated to be $11-$17 billion under our modelled conventional rail scenarios, and $15-$27 billion under high-speed rail scenarios (a generic discounting sketch follows this list).
  • This study estimates economic benefits, rather than undertaking a full cost-benefit analysis. The analysis is subject to a range of assumptions, particularly passenger forecasts.
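
For readers unfamiliar with the mechanics behind a figure like "$11-$17 billion in present value terms over 60 years," the sketch below shows the standard discounting arithmetic. The discount rate and benefit stream are illustrative placeholders, not the study's inputs:

    # Present value of a 60-year benefit stream: discount each year's
    # benefit back to year zero and sum.
    def present_value(annual_benefits, discount_rate):
        return sum(b / (1.0 + discount_rate) ** t
                   for t, b in enumerate(annual_benefits, start=1))

    # Illustrative only: a flat $0.5B/year benefit for 60 years at a 3%
    # real discount rate gives roughly $13.8B in present value.
    print(present_value([0.5e9] * 60, 0.03))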

Introduction

Canada’s plans for faster, more frequent rail services in the Toronto-Québec City corridor are underway.

In 2021, the federal government announced plans for a new, high frequency, dedicated passenger rail link in the Toronto-Québec City corridor. More recently, the government has considered the potential for this passenger line to provide high-speed rail travel. These two options are scenarios within the current proposed rail project, which VIA-HFR has named “Rapid Train.” This paper analyzes the economic benefits of the proposed Rapid Train project, considering both scenarios, and by implication the costs of forgoing them.

The project offers substantial economic and social benefits to Canada. At a time when existing VIA Rail users must accept comparatively modest top speeds (by international standards) and regular delays, this project offers a dedicated passenger line to solve network capacity constraints. With Canada’s economy widely understood to be experiencing a productivity crisis (Bank of Canada 2024), combined with Canada seeking cost-effective approaches to reducing harmful CO2 emissions, the project offers both productivity gains and lower-emission transportation capacity. There are, in short, significant opportunity costs to postponing or not moving ahead with this investment and perpetuating the status quo in rail service.

The Toronto-Québec City corridor, home to more than 16 million people (Statistics Canada 2024) and generating approximately 41 percent of Canada’s GDP (Statistics Canada 2023), lacks the sort of fully modernized passenger rail service provided in comparable regions worldwide. For example, Canada is the only G7 country without high-speed rail (HSR) – defined by the International Union of Railways (UIC) as a train service having the capability to reach speeds of 250 km per hour. Congestion has resulted in reliability (on time performance) far below typical industry standards. Discussion about enhancing rail service in this corridor has persisted for decades. But delays come with opportunity costs. This Commentary adds up those costs in the event that Canada continues to postpone, or even abandons, investment in enhanced rail services.

The existing rail infrastructure in the Toronto-Québec City corridor was developed several decades ago and continues to operate within parameters set during that time. However, significant changes have occurred since then, including higher population growth, economic development, and shifting transportation patterns. Rising demand for passenger and freight transportation – both by rail and other modes – has increased pressure on the region’s transportation network. There is increasing need to explore the various mechanisms through which enhancements to rail service could affect regional economic outcomes.

According to Statistics Canada (2024), the Toronto-Québec City corridor is the most densely populated and heavily industrialized region in Canada. This corridor is home to 42 percent of the country’s total population and comprises 43 percent of the national labor market. Transport Canada’s (2023) projections indicate that by 2043, an additional 5 million people will reside in Québec and Ontario, marking a 21 percent increase from 2020. This population growth will comprise more than half of Canada’s overall population increase over the period. As the population and economy continue to expand, the demand for all modes of transportation, including passenger rail, will rise. The growing strain on the transportation network highlights the need for infrastructure improvements within this corridor. In 2019, passenger rail travel accounted for only 2 percent of all trips in the corridor, with the vast majority of journeys (94 percent) undertaken by car (VIA-HFR website). This distribution is more skewed than in other countries with high-speed rail. For example, between London and Paris, aviation capacity has roughly halved since the construction of a high-speed rail link (the Eurostar) 25 years ago; the Eurostar has since achieved approximately 80 percent modal share (Morgan et al. 2025, OAG website 2019). As such, there is potential for rail to have a greater modal share in Canada, particularly as the need for sustainable and efficient transportation solutions becomes more pressing in response to population growth and environmental challenges.

In practical terms, the cost of not proceeding with the Rapid Train project can be estimated as the loss of economic benefits that could have been realized if the project had moved forward. It should be noted that this study does not undertake a full cost-benefit analysis (CBA) of the proposed investment. Rather, it examines the various economic advantages associated with introducing the proposed Rapid Train service in the Toronto-Québec City corridor. Specifically, it analyzes five key dimensions of economic impact: rail-user benefits, road congestion reduction, road network safety improvements, agglomeration effects (explained below), and emission savings. The first three benefits primarily impact individuals who would have travelled regardless, or were induced to travel by rail or car. Agglomeration benefits extend to everyone living in the corridor, while emission savings contribute to both national and international efforts to combat climate change. In each of these ways, enhanced rail services can contribute to regional economic growth and sustainability. By evaluating these aspects, this study aims to develop quantitative estimates of the benefits that enhanced rail services could bring to the economy and society, and by doing so indicate the potential losses that could result from forgoing the proposed rail investment.

Rail user benefits constitute the most direct economic gains. Through faster rail transport with fewer delays, rail users experience reduced travel times, increased service reliability, and improved satisfaction. The Rapid Train project provides rail-user benefits because dedicated passenger tracks would remove the need to give way to freight transport, thus reducing delays, and because faster routes would further reduce travel times.

Congestion effects extend beyond individual transportation choices to influence broader economic activity. This study considers how enhanced rail services might affect road congestion levels in key urban centres and along major highways within the corridor. Road network safety is a further aspect of the economic analysis in this study, as modal shift from road to rail could reduce road traffic accidents and their associated economic costs.

Agglomeration economies are positive externalities that arise from greater spatial concentration of industry and business, resulting in lower costs and higher productivity. Greater proximity results in improved opportunities for labour market pooling, knowledge interactions, specialization and the sharing of inputs and outputs (Graham et al. 2009). Improved transportation (both within and between urban areas) can support agglomeration economies by improving connectivity, lowering the cost of interactions and generating productivity gains.1 Supported by academic literature (Graham 2018), these wider economic benefits are included within international transportation appraisal guidance (Metrolinx 2021, UK Department for Transport 2024). Agglomeration effects from enhanced connectivity offer economic benefits distinct from (and additional to) benefits for rail users.

Environmental considerations, particularly emission savings, constitute a further economic benefit. This analysis examines potential reductions in transportation-related emissions and their associated economic value, including direct environmental costs. This examination includes consideration of how modal shifts might influence the corridor’s overall carbon footprint and its associated economic impacts.

The methodology employed in this analysis draws from established economic assessment frameworks while incorporating recent developments in transportation economics. The study utilizes data from VIA-HFR, Statistics Canada, and several other related studies and research papers. Where feasible, the analysis utilizes assumptions that are specific to the Toronto-Québec City corridor, recognizing its unique characteristics, economics, and demographic patterns.

The findings presented here may facilitate an understanding of how different aspects of rail service enhancement might influence economic outcomes across various timeframes and stakeholder groups. This analysis acknowledges that while some benefits may be readily quantifiable, others involve more complex, long-term economic relationships that require careful consideration within the specific context of the Toronto-Québec City corridor.

Based on our modelling and forecasts, the proposals for passenger rail infrastructure investment in the Toronto-Québec City corridor would present substantial economic, environmental, and social benefits (see Table 4 in the Appendix for a full breakdown, by scenario). Our scenario modelling is undertaken over a 60-year period, with new services coming on-stream from 2039, reported in 2024 present value terms. The estimated total of present value benefits ranges from $11 billion in the most conservative passenger growth scenario, to $27 billion in the most optimistic growth scenario. Cumulatively, in present value terms, economic benefits are estimated to be $11-$17 billion under our modelled conventional rail scenarios, and larger – $15-$27 billion – under high-speed rail scenarios. This is subject to a range of assumptions and inputs, including passenger forecasts.

These estimated benefits are built up from several components. User benefits – stemming from time savings, increased reliability, and satisfaction with punctuality – are the largest component, with an estimated value of $3.1-$9.2 billion. Economic benefits from agglomeration effects (leading to higher GDP) are estimated at $2.6-$3.9 billion, while environmental benefits from reduced greenhouse gas emissions are estimated at $2.6-$7.1 billion. Additional benefits include reduced road congestion, valued at $2.0-$5.9 billion, and enhanced road safety, which adds an estimated $0.3-$0.8 billion. In addition, further sensitivity analysis has been undertaken alongside the main passenger growth scenarios.

Overall, the findings in this study demonstrate and underscore the substantial economic benefits of rail investment in the Toronto-Québec City corridor, and the transformative potential impact on the Toronto-Québec City region from economic growth and sustainable development.

Finally, there are several qualifications and limitations to the analysis in this study. It considers the major areas of economic benefit rather than undertaking a full cost-benefit analysis or considering wider opportunity costs, such as any alternative potential investments not undertaken. It provides an economic analysis, largely building on VIA-HFR passenger forecasts, rather than a full bottom-up transport modelling exercise. Quantitative estimates are subject to degrees of uncertainty.

The Current State of Passenger Rail Services in Ontario and Québec

The Toronto-Québec City corridor is the most densely populated and economically active region of the country. Spanning major urban centres such as Toronto, Ottawa, Montreal, and Québec City, this corridor encompasses more than 42 percent of Canada’s population and is a vital artery for both passenger and freight transport. Despite the significance of the corridor and the economic potential it holds, passenger rail services in Ontario and Québec face numerous challenges, and their overall state remains a topic of debate.

Passenger rail services in the region are primarily provided by VIA Rail, the national rail operator, along with commuter rail services like GO Transit in Ontario and Exo in Québec. VIA Rail operates intercity passenger trains connecting major cities in the Toronto-Québec City corridor, offering an alternative to driving or flying. VIA Rail’s most popular routes include the Montreal-Toronto and Ottawa-Toronto services, which run multiple times per day and serve business travellers, tourists, and daily commuters.

In addition to VIA Rail’s existing medium-to-long-distance services, commuter rail services play a key role in daily transportation for residents of urban centres like Toronto and Montreal. GO Transit, operated by Metrolinx, is responsible for regional trains serving the Greater Toronto and Hamilton Area, while Exo operates commuter trains in the Montreal metropolitan area. These services provide essential links for suburban commuters travelling to and from major employment hubs.

One of the primary challenges facing passenger rail services in Ontario and Québec is that the vast majority of rail infrastructure used by VIA Rail is owned by freight rail companies and is largely shared with freight trains, which means that passenger trains are regularly required to yield to freight traffic. This leads to frequent delays and slower travel times, making passenger rail less attractive compared to other modes of transport, especially for travellers who prioritize frequency, speed and punctuality. The absence of dedicated tracks for passenger rail is a major obstacle to improving travel times and increasing the frequency of service. Without addressing this issue, it is difficult to envisage a significant modal shift towards passenger rail, given that cars offer greater flexibility and planes offer faster speeds once airborne. Much of the rail network was constructed several decades ago, and despite periodic maintenance and upgrades, it is increasingly outdated and unable to accommodate higher speeds.

Passenger rail has the potential for low emission intensity. However, some of the potential environmental benefits of rail services in Ontario and Québec have yet to be fully realized. Many existing VIA Rail trains operate on diesel fuel, contributing to greenhouse gas emissions and air pollution. The transition to electrified rail, which would significantly reduce emissions, has been slow, and there is currently no comprehensive plan for widespread electrification of existing VIA Rail passenger rail services in the region.

The current state of rail passenger services in Ontario and Québec – and the opportunities for improvement – have prompted the development of the Rapid Train project along the Toronto-Québec City corridor, which proposes to reduce travel times between major cities and provide a more competitive alternative to air and car travel. The project would also generate significant environmental benefits by reducing greenhouse gas emissions associated with road and air transport. Furthermore, investing in enhanced rail services would cut journey times further, generating additional time savings and associated economic benefits.

Current Government Commitment to Enhanced Rail Services

The Rapid Train project plans to introduce approximately 1,000 kilometres of new, mostly electrified, and dedicated passenger rail tracks connecting the major city centres of Toronto, Ottawa, Montreal, and Québec City. As such, it would be one of the largest infrastructure projects in Canadian history. It is led by VIA-HFR, a Crown corporation that collaborates with several governmental organizations, including Public Services and Procurement Canada; Housing, Infrastructure and Communities Canada; Transport Canada; and VIA Rail, all of which have distinct roles during the procurement phases. Subject to approval, a private firm or consortium is expected to be appointed to build and operate these new rail services, via a procurement exercise (see below).

This new rail infrastructure would improve the frequency, speed, and reliability of rail services, making it more convenient for Canadians to travel within the country’s most densely populated regions. The project has the potential to shift a significant portion of travel from cars (which currently account for 94 percent of trips in the Toronto-Québec City corridor) to rail (which represents just 2 percent of total trips).

The project also seeks to contribute to Canada’s climate goals by reducing greenhouse gas emissions. Electrified trains and the use of dual-powered technology (for segments of the route that may still require diesel) will significantly reduce the environmental footprint of intercity travel. The project is expected to improve the experience for VIA Rail users, as dedicated passenger tracks will reduce delays caused by freight traffic, offering passengers faster, more frequent departures, and shorter travel times.

Beyond environmental benefits, the project is expected to stimulate economic growth by creating new jobs in infrastructure development, supporting new economic centres, and enhancing connectivity between cities, major airports, and educational institutions.

The project is currently at the end of the procurement phase, following the issuance of a Request for Proposals (RFP) in October 2023. Through the procurement exercise, a private-sector partner will be selected to co-develop and execute the project. The design phase, which may last four or more years, will involve regulatory reviews, impact assessments, and the development of a final proposal to the government for approval. Once constructed, passenger operations are expected to commence by 2040.

The Rapid Train project also offers opportunities to improve services on existing freight-owned tracks. VIA Rail’s local services, which currently operate between these major cities, will benefit from integration with this project. Although final service levels are not yet determined, the introduction of a new dedicated passenger rail line is expected to enable VIA Rail to optimize operating frequencies and schedules, leading to more responsive and efficient service for passengers. In turn, this will mean that departure and arrival times can be adjusted to better suit travellers’ needs, reducing travel times and increasing the attractiveness of rail as a mode of transportation for both leisure and business. As many of VIA Rail’s existing passenger services switch onto dedicated tracks, there is potential to free up capacity on the existing freight networks. As such, freight rail traffic may benefit from reduced congestion, supporting broader economic growth by easing supply chains and by improving the efficiency of goods transportation across Canada.

The project design will enable faster travel compared to existing services, but as the co-development phase progresses, it will examine the possibility of achieving even higher speeds on certain segments of the dedicated tracks. Achieving higher speeds is not guaranteed, due to the extensive infrastructure changes required and their associated costs, e.g., full double-tracking and the closure of approximately 1,000 public and private crossings. However, the project design currently incorporates flexibility to explore higher speeds where there may be opportunities for operational and financial efficiencies and additional user benefits.

The current Rapid Train project proposal seeks to achieve wider social and government objectives. In the context of maintaining public ownership, private-sector development partners will be required to respect existing labor agreements. VIA Rail employees will retain their rights and protections, with continuity ensured under the Canada Labour Code and relevant contractual obligations.

International Precedent

High-Speed Rail (HSR) already exists in many countries, with notable examples of successful implementation in East Asia and Europe. As of the middle of 2024, China has developed the world’s largest HSR network spanning over 40,000 kilometres, followed by Spain (3,661 km), Japan (3,081 km), and France (2,735 km) (Statista 2024). Among the G7 nations, Canada stands as the only country without HSR infrastructure, although the United States maintains relatively limited high-speed operations through the Acela Express in the Northeast Corridor. Recent significant HSR developments include China’s Beijing-Shanghai line (2,760 km), which is the world’s longest HSR route. In Europe, the UK’s High Speed 1 (HS1) connects London to mainland Europe via the Channel Tunnel. Italy has extended its Alta Velocità network with the completion of the Naples-Bari route in 2023, significantly reducing travel times between major southern cities (RFI 2023). Morocco recently became the first African nation to implement HSR with its Al Boraq service between Tangier and Casablanca (OCF 2022). In Southeast Asia, Indonesia’s Jakarta-Bandung HSR, completed in 2023, is the region’s first HSR system (KCIC 2023). India is constructing the Mumbai-Ahmedabad HSR corridor, the country’s first bullet train project, which is scheduled to commence partial operations by 2024 (NHSRCL 2023).

The economic impacts of HSR have been extensively studied, particularly in Europe. In Germany, Ahlfeldt and Feddersen (2017) analyzed the economic performance of regions along the high-speed rail line between Cologne and Frankfurt: the study found that, on average, six years after the opening of the line, the GDP of regions along the route was 8.5 percent higher than their estimated counterfactual. In France, Blanquart and Koning (2017) found that the TGV network catalyzed business agglomeration near station areas, with property values increasing by 15-25 percent within a 5km radius of HSR stations. An evaluation of the UK’s HS1 project estimated cumulative benefits of $23-$30 billion (2024 prices, present value, converted from GBP) over the lifetime of the project, excluding wider economic benefits (Atkins 2014).

Modal shift and passenger growth are critical drivers of economic benefits. The Madrid-Barcelona corridor in Spain provides an example: HSR captured over 60 percent of the combined air-rail market within three years of operation, demonstrating that HSR can have a competitive advantage over medium-distance air travel (Albalate and Bel 2012). However, analysis by the European Court of Auditors (2018) suggests that HSR routes require certain volumes of passengers (estimated at nine million) to become net beneficial, and while some European HSR routes have achieved this level (including the Madrid-Barcelona route), others have not. In the US, the Amtrak Acela service between Boston and Washington D.C. is estimated to have 3-4 million passengers (Amtrak 2023). For some high-speed rail lines, passenger volumes are supported by government environmental policy. For example, Air France was asked directly by the government to reduce the frequency of short-haul flights for routes where a feasible rail option existed (Reiter et al. 2022). Overall, passenger growth constitutes a key assumption regarding the benefits derived from the Rapid Train project.

Regarding the environmental benefits of HSR, a detailed study by the European Environment Agency (2020) found that HSR generates approximately 14g of CO2 per passenger-kilometre, compared to 158g for air travel and 104g for private vehicles. In Japan, the Central Japan Railway Company reports that the Shinkansen HSR system consumes approximately one-sixth the energy per passenger-kilometre compared to air travel. The UIC’s Carbon Footprint Analysis (2019) demonstrated that HSR infrastructure, despite high initial carbon costs during construction, typically achieves carbon neutrality within 4-8 years of operation through reduced emissions from modal shift.

Socioeconomic benefits of HSR extend beyond direct impacts on rail users. In Spain, the Madrid-Barcelona high-speed rail line enhanced business interactions by allowing for more same-day return trips and improved business productivity (Garmendia et al. 2012). Research has found that Chinese cities connected by HSR experienced a 20 percent increase in cross-regional business collaboration, providing potential evidence of enhanced knowledge spillovers and innovation diffusion (Wang and Chen 2019).

However, the implementation of HSR is not without challenges. Flyvbjerg’s (2007) analysis of 258 transportation infrastructure projects found that rail projects consistently faced cost overruns averaging approximately 45 percent. For example, the costs of the California High-Speed Rail project in the United States rose from an initial estimate of $33 billion in 2008 to over $100 billion by 2022, highlighting the importance of realistic cost projections and robust project management.

Positive labor market impacts are also evident, although varied by region. Studies in Japan by Kojima et al. (2015) found that cities served by Shinkansen experienced a 25 percent increase in business service employment over a 10-year period after connection. European studies, particularly in France and Spain, show more modest but still positive employment effects, with employment growth rates 2-3 percent higher in connected cities compared to similar unconnected ones (Crescenzi et al. 2021).

For developing HSR networks, international experience suggests several critical success factors. These include careful corridor selection based on population density and economic activity, integration with existing transportation networks, and sustainable funding mechanisms. The European Union’s experience, documented by Vickerman (2018), emphasizes the importance of network effects, finding that the value of HSR increases significantly when it connects multiple major economic centres.

Methodology

This study integrates data from VIA-HFR, Statistics Canada, prior reports on rail infrastructure proposals in Canada, and related studies, to build an economic assessment of potential benefits of the proposed Rapid Train project. Key assumptions throughout this analysis are rooted in published transportation models, modelling guidelines, and an extensive body of research. The methodology draws extensively from the Business Case Manual Volume 2: Guidance by Metrolinx, which itself draws upon the internationally recognized transportation appraisal guidelines set by the UK government’s Department for Transport (DFT). These established guidelines offer best practices and standards that provide a structured and reliable framework for estimating benefits. By aligning with proven methodologies in transportation and infrastructure project appraisal, this study ensures rigor and robustness within the economic modelling and analysis.

The proposed route includes four major stations: Toronto, Ottawa, Montréal, and Québec City. These major urban centres are expected to experience the most significant ridership impacts and related benefits. There are three further stations on the proposed route – Trois-Rivières, Laval, and Peterborough – although these are anticipated to have a more limited effect on the overall modelling results, due to their smaller populations. Based on forecast ridership data provided by VIA-HFR for travel between the four main stations, our model designates these areas as four separate zones to facilitate the benefit estimation. Figure 1 below illustrates the proposed route for the Rapid Train project and highlights the different zones modeled in this analysis.

According to current VIA-HFR projections, the routes are expected to be operational between 2039 and 2042. In line with typical transport appraisals, this paper estimates and monetizes economic and social benefits of the project over a 60-year period, summing the cumulative benefits from 2039 through to 2098, inclusive. To calculate the total present value (as of 2024) of these benefits, annual benefits are discounted at a 3.5 percent social discount rate, in line with Metrolinx guidance, and then aggregated across all benefit years.
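
As a rough illustration of that discounting arithmetic, here is a minimal Python sketch; the flat $500-million annual benefit stream is a placeholder, not a figure from the study.

```python
# Discount annual benefits (2039-2098) back to 2024 at the 3.5 percent
# social discount rate used in the study, then sum them.

BASE_YEAR = 2024
DISCOUNT_RATE = 0.035

def present_value(annual_benefits: dict[int, float]) -> float:
    """Total present value of a {year: benefit} stream in BASE_YEAR dollars."""
    return sum(
        value / (1 + DISCOUNT_RATE) ** (year - BASE_YEAR)
        for year, value in annual_benefits.items()
    )

# Placeholder stream: $500M per year from 2039 through 2098, inclusive.
stream = {year: 500e6 for year in range(2039, 2099)}
print(f"PV (2024 dollars): ${present_value(stream) / 1e9:.1f}B")
```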

Our model examines multiple scenarios to assess the range of potential benefits under various conditions. The primary scenarios within the Rapid Train project are for Conventional Rail (CR) and High-Speed Rail (HSR). These scenarios are distinguished by differences in average travel time, with HSR benefiting from significantly faster speeds than CR, and therefore lower travel times (see Table 2).

Within each of these scenarios, we consider three sub-scenarios from VIA-HFR’s modelled passenger projections – central, downside and upside – plus a further sub-scenario (referred to as the 2011 feasibility study in the Figures) based on previous modelled estimates of a dedicated passenger rail line in the corridor. The central sub-scenario provides VIA-HFR’s core forecast for passenger growth under CR and HSR. The upside sub-scenario reflects VIA-HFR’s most optimistic assumptions about passenger demand, while the downside represents the organisation’s more cautious assumptions.

The use of VIA-HFR’s passenger projections is cross-checked in two ways: First, our analysis models an alternative passenger growth scenario (2011 feasibility study), which is based upon the projected growth rate for passenger trips as outlined in the Updated Feasibility Study of a High-Speed Rail Service in the Québec City – Windsor Corridor by Transport Canada (2011).2 The analysis in that study was undertaken by a consortium of external consultants. Second, we have reviewed passenger volumes in other jurisdictions (discussed above and below).

In the absence of investment in the Rapid Train project, VIA-HFR’s baseline scenario passenger demand projections indicate approximately 5.5 million trips annually by 2050 using existing VIA Rail services in the corridor. In contrast, with investment, annual projected demand for CR ranges from 8 to 15 million trips, and for HSR between 12 and 21 million trips by 2050, across all the sub-scenarios described above. Figures 2 and 3 illustrate these projected ridership figures under CR and HSR scenarios across each sub-scenario, as well as compared to the baseline scenario.

Under the CR and HSR scenarios, while the vast majority of rail users are expected to use the new dedicated passenger rail services, VIA-HFR passenger forecasts indicate that some rail users within the corridor will continue to use services on the existing VIA Rail line, for example, due to travelling between intermediate stations (Kingston-Ottawa). The chart below illustrates the breakdown of benefits under the central sub-scenario for high-speed rail.

User Benefits

User benefits in transportation projects such as CR/HSR can be broadly understood as the tangible and intangible advantages that rail passengers gain from improved services. These benefits encompass the value derived from time saved, enhanced reliability, reduced congestion, and improved overall travel experience. For public transit projects like CR/HSR, user benefits are often key factors in justifying the investment due to their broad social and economic impact.

Rail infrastructure projects can reduce the “generalized cost” of travel between areas, which directly benefits existing rail users, as well as newly induced riders. The concept of generalized cost in transportation economics refers to the total cost experienced by a traveller, considering not just monetary expenses (like ticket prices or fuel) but also non-monetary factors such as travel time, reliability, comfort, and accessibility.

Investments that improve transit may reduce generalized costs in several ways. Consistent, on-time service lowers the uncertainty, inconvenience and dissatisfaction associated with delays. More frequent services provide passengers with greater flexibility and reduced waiting times. Reduced crowding can offer more comfortable travel, reducing the disutility associated with congested services. Enhanced services like better seating, Wi-Fi, or improved station facilities may increase user satisfaction. Better access to transit stations or stops may allow for easier integration into daily commutes, increasing the convenience for existing and new travellers. Faster travel can reduce travel time, which is often valued highly by passengers.

In this paper, user benefits are estimated based on three core components: travel time savings based on faster planned journey times, enhanced reliability (lower average delays on top of the planned journey time), and the psychological benefit of more reliable travel. In our analysis, the pool of users comprises the existing users who are already VIA Rail passengers within this corridor, plus new users who are not prior rail passengers. Within this category of new users there are two sub-groups. First, new users include individuals who are forecast to switch to rail from other modes of transport, such as cars, buses, and airplanes – known as “switchers.” Second, new users also include individuals who are induced to begin using CR/HSR as a result of the introduction of these new services – known as “induced” passengers. Overall, this approach captures the comprehensive user benefits of CR/HSR, recognizing that time efficiency, increased dependability, and greater customer satisfaction hold substantial value for both existing and new riders. The split of new users across switchers and induced users – including the split of induced users between existing transport modes, primarily road and air – is based on the federal government’s 2011 feasibility study, although the modelling in this Commentary also undertakes sensitivity analysis using VIA-HFR’s estimates for these proportions. The approach to estimating rail-user benefits is discussed below.

The modelling in this study incorporates projections of passenger numbers for both existing VIA Rail services (under a ‘no investment’ scenario) and for the proposed CR/HSR projects, sourced from VIA-HFR transport modelled forecasts. This enables the derivation of forecasts for both existing and new users.

In line with the formula (above) for user benefits, this study estimates the reduction in generalized costs (C1 – C0) arising from the new CR/HSR transportation service. Since the ticket price for the proposed CR/HSR is still undetermined, we have not assumed any changes versus current VIA Rail fares, although this is discussed as part of sensitivity analysis. The model reflects a reduction in generalized costs attributed to shorter travel times and enhanced service reliability under CR/HSR. Table 2 shows a comparison of the average scheduled journey times (as of 2023) for existing VIA Rail services, compared to forecast journey times under the proposed CR/HSR services, across different routes.

In addition to travel time savings based on scheduled journey times, an important feature of the CR/HSR project is that a new, dedicated passenger rail line can reduce the potential for delays. To estimate the reduction in travel delays under CR/HSR, we first calculated a lateness factor for both existing VIA Rail and the proposed CR/HSR, based on punctuality data and assumptions. Current data indicate that VIA Rail services are on time (reaching the destination within 15 minutes of the scheduled arrival time) for approximately 60 percent of journeys. Therefore, VIA Rail experiences delays (arriving more than 15 minutes later than scheduled) approximately 40 percent of the time. Data showing the average duration of delays are not available, and therefore we estimate that each delay is 30 minutes on average, based on research and discussions with stakeholders. CR/HSR would provide a dedicated passenger rail service, which would have a far lower lateness rate. Our model assumes CR/HSR would aim to achieve significantly improved on-time performance, with on-time arrivals (within 15 minutes) for 95 percent of journeys (Rungskunroch 2022), which equates to 5 percent (or fewer) of trains being delayed upon arrival.

Combined, there are time savings to users from both faster scheduled journeys and fewer delays. The estimated travel time savings are derived from the difference between the forecast travel times of CR/HSR and the average travel times currently experienced with VIA Rail. The value of time is monetized by applying a value of $21.45 per hour, calculated by adjusting the value of time recommended by Metrolinx’s Business Case Manual Volume 2: Guidance ($18.79 per hour, in 2021 dollars) to 2024 dollars using the Consumer Price Index (CPI). This value remains constant (in real terms) over our modelling period.

There is an additional psychological cost of unreliability associated with delays. Transport appraisal guidelines and literature typically ascribe a multiplier to the value of time for unscheduled delays. The modelling in this study utilizes a multiplier of 3 for lateness, which is consistent with government transport appraisal guidance in the UK and Ireland (UK’s Department for Transport 2015, Ireland’s Department of Transport 2023). Some academic literature finds that multipliers may be even higher, although they vary according to journey distance and purpose (Rossa et al. 2024). Overall, the lateness adjustment increases the value to rail users of CR/HSR due to its improved reliability and generates a small uplift to the total user benefits under CR/HSR.
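
As an illustrative sketch of the per-trip arithmetic described above: the journey times below are hypothetical placeholders, while the delay rates (40 percent versus 5 percent), 30-minute average delay, $21.45/hour value of time, and lateness multiplier of 3 come from the text. The rule-of-half aggregation for new users (referenced later in this study) is a standard appraisal convention, assumed here rather than taken from the study’s own formula.

```python
# Per-trip user benefit = scheduled time saving (valued at the normal
# rate) + expected delay reduction (valued at 3x, per the lateness
# multiplier). Journey times and trip counts are hypothetical.

VALUE_OF_TIME = 21.45      # $/hour, 2024 dollars (from the text)
LATENESS_MULTIPLIER = 3    # unscheduled delay valued at 3x in-vehicle time

def expected_delay_hours(delay_rate: float, avg_delay_min: float = 30) -> float:
    """Probability of a delay times its average length, in hours."""
    return delay_rate * avg_delay_min / 60

def per_trip_benefit(old_sched_h: float, new_sched_h: float,
                     old_delay_rate: float = 0.40,
                     new_delay_rate: float = 0.05) -> float:
    schedule_saving = (old_sched_h - new_sched_h) * VALUE_OF_TIME
    delay_saving_h = (expected_delay_hours(old_delay_rate)
                      - expected_delay_hours(new_delay_rate))
    reliability_saving = delay_saving_h * VALUE_OF_TIME * LATENESS_MULTIPLIER
    return schedule_saving + reliability_saving

def aggregate_benefit(per_trip: float, existing_trips: float,
                      new_trips: float) -> float:
    """Standard rule-of-half convention: new (switching/induced) users
    are valued at half the per-trip benefit of existing users."""
    return per_trip * (existing_trips + 0.5 * new_trips)

b = per_trip_benefit(5.0, 3.0)  # hypothetical 5h -> 3h journey
print(f"${b:.2f} per trip")     # ~$54 per trip
print(f"${aggregate_benefit(b, 4e6, 6e6) / 1e6:.0f}M per year")  # placeholder trips
```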

The modelling combines these user benefits and makes a final adjustment to net off indirect taxes, ensuring that economic benefits are calculated on a like-for-like basis with costs incurred by VIA-HFR (Metrolinx 2021). Individual users’ value of time implicitly takes into account indirect taxes paid, whereas VIA-HFR’s investments are not subject to indirect taxation. Ontario’s rate of indirect taxation (13 percent harmonized sales tax rate) has been used in the modelling (Metrolinx 2021).
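
A one-line sketch of that adjustment, assuming the conversion takes the standard form of deflating benefits by one plus the indirect tax rate:

```python
# Users' willingness to pay implicitly includes Ontario's 13% HST, while
# VIA-HFR's costs are not subject to indirect tax, so benefits are
# deflated by (1 + 0.13) for a like-for-like comparison (the exact form
# of the adjustment is an assumption here).
HST = 0.13

def net_of_indirect_tax(gross_benefit: float) -> float:
    return gross_benefit / (1 + HST)

print(f"${net_of_indirect_tax(1e6):,.0f}")  # $1M gross -> ~$884,956 net
```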

The modelling does not assume any variation in ticket prices under the proposed CR/HSR services, relative to existing VIA Rail services. User benefits in the analysis are derived purely from the shorter journey times and improved reliability. This approach enables the estimation, in the first instance, of the potential benefits from time savings and reliability. While CR/HSR ticket prices are not yet determined, it is nevertheless possible to consider the impact of changes to ticket prices as a secondary adjustment, which is discussed in the sensitivity analysis further below.

Congestion and Safety on the Road Network

In addition to rail-user benefits, the proposed CR/HSR project would also provide benefit to road users via decongestion and a potential reduction in traffic accidents.

When new travel options become available, such as improved rail services, some travellers shift from driving to using transit, reducing the number of vehicles on the road. This reduction in vehicle-kilometres travelled (VKT) decreases road congestion, providing benefits to the remaining road users. Decreased congestion leads to faster travel times, and can also lower vehicle operating costs, particularly in terms of fuel efficiency and vehicle wear-and-tear.

Our research model includes a forecast of how improvements in rail travel could lead to decongestion benefits for auto travellers in congested corridors. Because CR/HSR would offer a faster and more reliable journey than existing VIA Rail services, VIA-HFR’s passenger modelling forecasts shifts in travel patterns, with a significant proportion of new rail users switching from roads. These shifts reduce road congestion and in turn generate welfare benefits for those continuing to use highways.

Analysis of Canadian road use data, cross-checked with more granular traffic data from the UK, suggests that the proportion of existing road VKT is 37 percent in peak hours and 63 percent in off-peak hours, based on Metrolinx’s daily timetable of peak versus non-peak hours (Metrolinx 2021, Statistics Canada 2014, Department for Transport 2024). Using this information, the estimated weighted average impact of road congestion is approximately 0.004 hours/VKT. Time savings are converted into monetary values (using $21.45/hour, in 2024 dollars) to estimate the economic benefits of reduced road congestion.
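
A minimal sketch of that decongestion arithmetic, with the annual vehicle-kilometres shifted off the road as a placeholder input:

```python
# Each vehicle-kilometre removed is assumed to save remaining road users
# ~0.004 hours in aggregate, valued at $21.45/hour (both from the text).

CONGESTION_HOURS_PER_VKT = 0.004   # weighted average across peak/off-peak
VALUE_OF_TIME = 21.45              # $/hour, 2024 dollars

def decongestion_benefit(vkt_removed: float) -> float:
    """Annual value of time saved by remaining road users."""
    return vkt_removed * CONGESTION_HOURS_PER_VKT * VALUE_OF_TIME

# Placeholder: 100 million VKT shifted from road to rail in a year.
print(f"${decongestion_benefit(100e6) / 1e6:.1f}M per year")  # ~$8.6M
```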

In practice, road networks are unlikely to decongest by the precise number of transport users who are forecast to switch from road to rail. First, the counterfactual level of road congestion (without CR/HSR) will change over time, as a function of population growth, investment in road networks (such as through highway expansion), developments in air transport options, and wider factors. Many of these factors are not known precisely (e.g., investment decisions regarding highways expansion across the coming decades), therefore the counterfactual is necessarily subject to uncertainty. Second, if some road users switch to rail due to investment in CR/HSR, the initial (direct) reduction in congestion would reduce the cost of road travel, inducing a subsequent (indirect) “bounce-back” of road users (known as a general equilibrium effect). The modelling of congestion impacts in this study is necessarily a simplification, focusing on the direct impacts of decongestion, based on the forecast number of switchers from road to rail.

In addition to decongestion, CR/HSR may also improve the overall safety of the road network through fewer vehicle collisions. Collisions not only cause physical harm but also impose economic and social costs. These include the emotional toll on victims and families, lost productivity from injuries or fatalities, and the costs associated with treating accident-related injuries. Road accidents can cause disruptions that delay other travellers, adding further economic costs, and can also incur greater public expenditure through emergency responses.

With CR/HSR expected to shift some users from road to rail, this study models the forecast reduction in overall road VKT. This estimate for the reduction in road VKT is converted into a monetary value assuming $0.09/VKT in 2024 prices, which is discounted in future years by 5.3 percent per annum to account for general safety improvements on the road network over time (such as through improvements in technology) and fewer accidents per year (Metrolinx 2018, Metrolinx 2021).
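
The safety valuation can be sketched in the same way; the annual VKT reduction is again a placeholder, and the stream is discounted at the study’s 3.5 percent rate:

```python
# Each VKT removed is valued at $0.09 (2024 prices), with that unit value
# declining 5.3% per year to reflect general road-safety improvements,
# and each year's benefit discounted back to 2024.

SAFETY_VALUE_2024 = 0.09   # $ per VKT removed (from the text)
SAFETY_DECLINE = 0.053     # unit value falls 5.3% per annum
DISCOUNT_RATE = 0.035
BASE_YEAR = 2024

def safety_benefit_pv(vkt_removed_per_year: float,
                      start: int = 2039, end: int = 2098) -> float:
    total = 0.0
    for year in range(start, end + 1):
        t = year - BASE_YEAR
        unit_value = SAFETY_VALUE_2024 * (1 - SAFETY_DECLINE) ** t
        total += vkt_removed_per_year * unit_value / (1 + DISCOUNT_RATE) ** t
    return total

# Placeholder: 100 million VKT removed annually.
print(f"${safety_benefit_pv(100e6) / 1e6:.0f}M present value")
```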

Agglomeration

Agglomeration economies are the economic benefits that arise when firms and individuals are located closer to one another. This generates productivity gains which are additional to direct user benefits. These gains can stem from improved labor market matching, knowledge spillovers, and supply chain linkages, benefiting groups of firms within specific industries (localization economies) as well as across multiple industries (urbanization economies). Where businesses cluster more closely – such as within dense, urbanized environments – these businesses benefit from proximity to larger markets, varied suppliers, and accessible public services. For instance, if a manufacturing firm relocates to an urban hub such as Montreal, productivity benefits may ripple across industries as the economic density and activity scale of the area increases. Agglomeration can enable longer-term economic benefits, through collaboration across businesses, universities, and research hubs, stimulating research and development, supporting innovation and enabling new industries to develop and grow.

Transport investments generate economic benefits and increase productivity through urbanization and localization economies. Urbanization economies (Jacobs 1969) refer to benefits arising from a business being situated in a large urban area with a robust population and employment base. This type of agglomeration allows firms to leverage broader markets and infrastructure advantages, thus achieving economies of scale that are independent of industry. Conversely, localization economies (Marshall 1920) focus on productivity gains within a specific industry, where firms in close proximity can cluster together to benefit from a specialized labor pool and more efficient supply chains. For example, as multiple manufacturing firms cluster within an area, their proximity allows them to co-create a specialized workforce and share industry knowledge, creating productivity gains unique to that industry.

In practice, improved transportation can generate agglomeration effects in two ways. The first is “static clustering,” where improvements in connectivity facilitate greater movement between existing clusters of businesses and improved labor market access, without changing land use. For individuals and businesses in their existing locations, enhanced connectivity reduces travel times and the costs of interactions, so people and businesses are effectively closer together and the affected areas have a higher effective density.

Second, “dynamic clustering” can occur when transport investments alter the location or actual density of economic activity. Dynamic clustering can lead to either increased or decreased density in certain areas, impacting the overall productivity levels across regions by altering labor and firm distributions. Conceptually, dynamic clustering’s benefits include the benefits from static clustering.

The analysis in this study is based on static clustering effects, focusing on productivity benefits arising from improved connectivity without modelling potential changes in land use or actual density. This approach estimates the direct economic gains of reduced travel times and enhanced accessibility within existing urban and industrial structures. Benefits arising from dynamic clustering are subject to greater uncertainty because they may involve displacement of economic activity between regions. In addition, variations in density across regions could be influenced by external factors – such as regional economic policies, housing availability, or industry-specific demands – that would require a much deeper and granular modelling exercise. Overall, focusing on static clustering provides a more conceptually conservative estimate of the benefits.

To estimate the agglomeration economies associated with the CR/HSR project, we utilize well-established transport appraisal methodology for agglomeration estimation (Metrolinx 2021). The analysis in this study applies one simplification to accommodate data availability, which is to undertake the analysis at an economy-wide level, rather than performing and aggregating a series of sector-specific analyses.

Overall, the three-step model estimates these agglomeration effects through changes in GDP. In the first step, the generalized journey cost (GJC) between each zone pair is calculated. This GJC serves as an average travel cost across various transportation modes (e.g., road, rail, air), taking account of journey times and ticket prices. The GJC is estimated for both the baseline (existing VIA Rail) and investment scenarios (CR/HSR), across multiple projection years. Due to the sensitivity of agglomeration calculations, in the baseline the GJC for CR/HSR, road and air are assumed to be equivalent, and in the investment scenario the GJC for road and rail are reduced by utilizing the rule of half principle (see Figure 5). The baseline utilizes Canada-wide vehicle kilometre data from Statistics Canada to estimate passenger modal shares (across existing VIA Rail, road, and air) for 2024, with the modal shares remaining constant over time in the baseline (Transport Canada 2021, Transport Canada 2018, Statistics Canada 2016). In the scenarios, the modal shares are adjusted for passengers moving from existing VIA Rail (and other transport modes) to CR/HSR, as well as induced passengers.

In the second step, the effective density of each of the four zones is calculated under all scenarios. Effective density increases in the investment scenarios because CR/HSR reduces the GJC and enhances connectivity between zones.

In the third step, changes in effective density between scenarios are converted into productivity gains measured as changes in GDP, utilizing a decay parameter of 1.8 and an agglomeration elasticity of 0.046 (Metrolinx 2021). The decay parameter (being greater than 1) diminishes the agglomeration benefits between regions that are further away from each other, such that the estimated productivity gains (arising from greater connectivity) are higher for areas that are closer together. The agglomeration elasticity is – based on academic literature – the assumed sensitivity of GDP to changes in agglomeration. In approximate terms, an elasticity of 0.046 implies that a 1 percent increase in the calculated effective density (see step 2) corresponds to a 0.046 percent increase in GDP. Data on GDP and employment are sourced from Statistics Canada’s statistical tables, and forecast employment growth is assumed to align with Statistics Canada’s projected population growth rates.
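
The three steps can be sketched as follows; the zone employment, GDP, and generalized journey costs (GJC) below are toy values rather than the study’s inputs, and the own-zone density term is omitted for simplicity:

```python
# Step 2: effective density of a zone = employment in other zones,
# weighted down by GJC raised to the decay parameter (1.8).
# Step 3: GDP uplift = elasticity (0.046) x % change in effective density.

DECAY = 1.8
ELASTICITY = 0.046

def effective_density(zone, employment, gjc):
    """Sum of other zones' employment, discounted by GJC**DECAY.
    (Own-zone term omitted in this simplified sketch.)"""
    return sum(emp / gjc[(zone, other)] ** DECAY
               for other, emp in employment.items() if other != zone)

def gdp_uplift(zone, gdp, employment, gjc_baseline, gjc_invest):
    """Productivity gain for one zone, expressed as a change in GDP."""
    ed0 = effective_density(zone, employment, gjc_baseline)
    ed1 = effective_density(zone, employment, gjc_invest)
    return gdp * ELASTICITY * (ed1 - ed0) / ed0

# Two-zone toy example: investment lowers the generalized journey cost.
emp = {"Toronto": 3.5e6, "Montreal": 2.2e6}          # placeholder jobs
gjc0 = {("Toronto", "Montreal"): 100.0, ("Montreal", "Toronto"): 100.0}
gjc1 = {("Toronto", "Montreal"): 70.0, ("Montreal", "Toronto"): 70.0}
print(f"${gdp_uplift('Toronto', 400e9, emp, gjc0, gjc1) / 1e9:.1f}B")  # illustrative only
```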

Emissions

Environmental effects from transportation create a further source of economic impact. This study considers the main dimensions – greenhouse gas (GHG) emissions and air quality – each contributing to external welfare impacts that affect populations and ecosystems.

Transportation accounts for approximately 22 percent of Canada’s GHG emissions (Canada’s 2024 National Inventory Report), primarily through automobile, public transit, and freight operations. Emissions from GHGs, particularly carbon dioxide, significantly impact the global climate by contributing to phenomena such as rising sea levels, shifting precipitation patterns, and extreme weather events. The social cost of carbon (SCC) framework, published by Environment and Climate Change Canada, assigns a monetary value to these emissions, reflecting the global damage caused by an additional tonne of CO₂ released into the atmosphere. The federal government’s SCC values were published in 2023, more recently than the values recommended by Metrolinx’s 2021 guidance, and therefore the government’s values are used for the modelling in this study. For SCC, data from Environment and Climate Change Canada’s Greenhouse Gas Estimates Table are used, adjusted to 2024 values using CPI. Within the modelling, SCC values increase from $303.6 per tonne (in 2024) to $685.5 per tonne (in 2098). Using SCC in cost-benefit analyses enables more informed decisions on transportation investments by calculating the welfare costs and benefits associated with emissions under both investment and business-as-usual scenarios.
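
As an illustration of the SCC valuation, the sketch below prices a placeholder stream of avoided emissions, assuming (purely for simplicity) a linear SCC path between the two endpoints quoted above; the government’s published schedule may differ.

```python
# Value avoided CO2 at the social cost of carbon, interpolated between
# $303.6/tonne (2024) and $685.5/tonne (2098), then discount to 2024.

SCC_2024, SCC_2098 = 303.6, 685.5
DISCOUNT_RATE, BASE_YEAR = 0.035, 2024

def scc(year: int) -> float:
    """Linearly interpolated SCC in $/tonne (the linear path is an assumption)."""
    return SCC_2024 + (SCC_2098 - SCC_2024) * (year - 2024) / (2098 - 2024)

def emissions_benefit_pv(tonnes_avoided_per_year: float,
                         start: int = 2039, end: int = 2098) -> float:
    return sum(
        tonnes_avoided_per_year * scc(y) / (1 + DISCOUNT_RATE) ** (y - BASE_YEAR)
        for y in range(start, end + 1)
    )

# Placeholder: 200,000 tonnes of CO2 avoided annually.
print(f"${emissions_benefit_pv(200_000) / 1e9:.1f}B present value")
```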

A wider set of pollutants emitted by vehicles – including CO, NOx, SO₂, VOCs, PM10s, and PM2.5s – pose further health risks, causing respiratory issues, heart disease, and even cancer. These harmful compounds, classified as Criteria Air Contaminants (CACs), impact individuals living or working in the vicinity of transport infrastructure, leading to external societal costs that are not fully perceived by direct users of the transport network. Health Canada’s Air Quality Benefits Assessment Tool (AQBAT) quantifies the health impacts of CACs, evaluating the total economic burden of poor air quality through a combination of local pollution data and Concentration Response Functions (CRFs), linking pollutants to adverse health effects. Furthermore, AQBAT considers air pollution’s effects on agriculture and visibility, allowing analysts to estimate the overall benefits of reducing transport-related emissions for communities across Canada.

This study identifies that CR/HSR has the potential to reduce emissions across multiple fronts. First, as an electrified rail system, CR/HSR is capable of operating with zero emissions, providing a cleaner alternative to existing rail services. If VIA Rail discontinues some services on overlapping routes with CR/HSR, emissions from rail transport in those areas would decrease, as per its planning forecasts. Additionally, CR/HSR’s higher speeds and greater reliability are expected to attract more passengers over time, encouraging a modal shift from more carbon-intensive forms of transportation, such as cars and airplanes. This anticipated shift would lead to a reduction in overall emissions from private vehicle and regional air travel, contributing to CR’s/HSR’s positive environmental impact.

By incorporating SCC and AQBAT metrics, the analysis offers a holistic appraisal of the environmental and social benefits of reducing emissions and improving air quality through CR/HSR, capturing the external welfare consequences beyond direct user impacts. Unit costs of CACs (see Table 3 below) are sourced from Metrolinx (2021) and are also adjusted by CPI into 2024 prices.

Results and Analysis

This section sets out the potential benefits of CR/HSR across various scenarios and sub-scenarios, spanning the 60-year project implementation period (2039 to 2098, inclusive). Results are reported in 2024 present value terms, cumulated over the 60-year period, as per cost-benefit analysis (CBA) literature (e.g., Metrolinx 2021). This cumulative present value represents the total value of benefits to 2098, with benefits in future years discounted to 2024 values. Figure 6 below illustrates the total cumulative present value of benefits for the proposed CR/HSR project, under different scenarios and passenger growth sub-scenarios in our model.

Since the HSR upside is the most optimistic sub-scenario, with a higher speed and the highest projected growth rate for rail passengers, it yields the largest total economic benefit, estimated at approximately $27 billion. Conversely, the CR downside assumes a comparatively lower speed and a smaller growth rate for rail passengers, resulting in the lowest benefit among all sub-scenarios, estimated at around $11 billion. This range of outcomes highlights that economic benefits are sensitive to assumptions around speed and passenger growth, underscoring the importance of these factors in the overall project evaluation.

Figure 7 illustrates the breakdown of benefits from the proposed CR/HSR project across different sub-scenarios and categories of benefits (see Table 4 in the Appendix for numerical values). User benefits form the largest component, indicating that rail passengers are expected to gain approximately $3.1–$9.2 billion in value over the modelling period, in present-day terms. Road decongestion effects, agglomeration impacts and emissions reductions are also forecast to deliver economic benefits. This study’s modelling estimates that CR/HSR could generate agglomeration effects that boost GDP by around $2.6–$3.9 billion over the 60-year analysis period, through enhancing productivity in the Ontario-Québec corridor. CR/HSR could significantly reduce greenhouse gas emissions and improve air quality, valued at approximately $2.6–$7.1 billion when considering the social cost of carbon and other pollutants. Benefits from reduced congestion on roads are estimated at $2.0–$5.9 billion. Finally, improved road safety offers an additional $0.3–$0.8 billion (approximately) in present value. Together, these impacts illustrate the wide-ranging economic, environmental, and social benefits anticipated from the CR/HSR project.

Given the potential sensitivity of economic benefits to assumptions around passenger growth, the 2011 federal government feasibility study provides a useful point of comparison for rail passenger growth under CR/HSR. The current outlook for rail passenger forecasts is not the same as it was in 2011, but some of the changes will have offsetting impacts. On one hand, Canada’s population has both grown faster (between 2011 and 2024) and is expected to grow faster in the future, relative to expectations in 2011. On the other hand, remote working has increased significantly since the COVID-19 pandemic. Passenger forecasts are discussed in more detail below.

Modelled agglomeration benefits are at the upper end of expectations. For example, the value of agglomeration effects for the HSR central scenario in this study ($3.4 billion) is almost 50 percent of the value of rail user benefits ($7.2 billion). Within academic literature, economic benefits from agglomeration are typically estimated to be in the region of 20 percent of direct user benefits on average (Graham 2018). However, across a range of studies, agglomeration benefits of up to 56 percent of direct user benefits have been identified (Oxera 2018). Therefore, the modelled estimates appear high relative to prior expectations, but within a plausible range.

To note, our agglomeration modelling (based on the Metrolinx methodology) forecasts significant economic benefits for all four of the zones. Our modelled agglomeration estimates for each zone are a function of the distance between zones (higher distance reduces agglomeration benefits due to the decay parameter), forecast uptake of CR/HSR services, and GDP. For example, Toronto’s agglomeration effect (as a percentage of GDP) is forecast to be one-third less than that of Montreal, due to Toronto being slightly further away (from Ottawa, Montreal and Quebec City) than those cities are to each other. The agglomeration modelling is complex and sensitive to input assumptions; it is therefore important to recognize a degree of uncertainty around the precise value of agglomeration-related economic benefits.

Sensitivity Analysis

Ticket prices for CR/HSR impact the total benefits. For example, under the HSR central scenario, if HSR ticket prices were set 20 percent above existing VIA Rail ticket prices, the forecast present value of user benefits falls by around 40 percent. The present value of economic benefits would fall by $4.2 billion compared to the HSR central case (from $20.7 billion to $16.5 billion), the majority being due to lower user benefits. However, recognizing cost of living concerns for Canadian households, it is also possible that median ticket prices could fall – such as through dynamic pricing – in which case economic benefits could also rise, by a similar amount.

The source of CR/HSR passengers will impact the estimated quantum of benefits, although relatively moderately. If proportions for “switchers” and “induced” passengers are sourced from VIA-HFR’s estimates, the level of economic benefits is $3.0 billion lower (falling from $20.7 billion to $17.7 billion). VIA-HFR’s forecasts assume a higher proportion of induced passengers, and also assume a greater share of switchers from air transport. As a result, the main impact of the VIA-HFR assumptions is to produce a smaller road decongestion effect, which reduces the potential benefits for road users.

The agglomeration calculation is relatively sensitive to the baseline assumption for passenger modal share. The modelling in this study is based on Canada-wide vehicle kilometre data, utilizing information from Transport Canada and Statistics Canada. Further analysis could be undertaken to refine this assumption across Ontario and Québec, while also ensuring that forecast agglomeration benefits align with wider estimates in existing transport literature.

Discussion and Qualifications

The analysis presented in this study is based on currently available information and projections, which are subject to certain limitations. Notably, there are uncertainties surrounding several key factors, including the precise routes and station locations, the design specifications (e.g., maximum achievable speed), ticket pricing, expected passenger numbers, the breakdown across “switchers” and “induced” passengers, and passenger modal shares more generally. These elements, if altered, could impact the economic outcomes considerably.

There are several important qualifications to the scope of this study. First, it provides an analysis of potential economic benefits from CR/HSR investment but does not seek to quantify or analyze the direct costs involved in procurement, financing, construction, operations, maintenance, or renewals. As such, this study constitutes an analysis of economic benefits rather than a full cost-benefit analysis. Second, this study seeks to estimate national, aggregate-level impacts, rather than undertaking a full distributional analysis of the impacts across and between different population groups. Third, this study’s primary focus is an economic assessment, rather than a transportation modelling exercise. The economic analysis utilizes and relies upon detailed, bottom-up passenger forecasts developed by VIA-HFR (received directly), cross-checked against the federal government’s previous HSR feasibility study from 2011. All three of these scope considerations are important to a holistic transport investment appraisal and should be considered in detail as part of investment decision-making.

Specifically, regarding this final issue – passenger forecasts – it is relevant to consider the transport modelling assumptions in further detail. As noted above, this study has not developed a full transport model, nor does it seek to take a definitive view on VIA-HFR’s forecasts; we would recommend that independent technical forecasts be developed. Nevertheless, several observations are relevant.

On one hand, VIA-HFR’s estimates do not appear implausible. For example, HSR has achieved a 7–8 percent share of passenger travel on certain routes in the United States (New York–Boston and New York–Washington), which appears broadly consistent with the level of ambition in VIA-HFR’s passenger growth forecasts for the HSR central scenario (LEK 2019). Internationally, HSR has achieved high market shares in Europe and Asia – for example, a 36 percent modal share for Madrid–Barcelona, a link estimated to serve 14 million passengers per year (International Railway Journal 2024), and 37 percent for London–Manchester – albeit noting that Europe typically has lower road usage and a higher propensity to use public transport (LEK 2019).

On the other hand, it is important to recognize the historical tendency toward optimism bias in transportation investment projects. For example, in the UK, the HS2 project was criticized as having “overstated the forecast demand for passengers using HS2 [and] overstated the financial benefits that arise from that demand” (Written evidence to the Economic Affairs Committee, UK 2014). A 2020 review of HS2 revised previous estimates of economic benefits downwards (Lord Berkeley Review 2020). As noted above, analysis by the European Court of Auditors (2018) posits that not all HSR projects induce sufficient passenger volumes to achieve net benefits over the project lifetime.

Overall, future passenger forecasts will depend upon a range of factors, including ticket prices, the availability and price of substitute modes (i.e., air), cultural preferences for private vehicle ownership, the impact of changing emission standards and the feasibility of construction plans.

This study applies some pragmatic, simplifying assumptions and approximations, consistent with best-practice transport appraisal guidance (Metrolinx 2021; Department for Transport, UK, 2024). These assumptions vary in the direction of their impact on our estimates of economic benefits.

On one hand, some of the modelled benefits are likely to be relatively high-end estimates.

First, for rail-user benefits, the modelling assumes no differential in ticket prices between existing VIA Rail services and CR/HSR. It also assumes that CR/HSR can deliver VIA-HFR’s proposed journey times with 95 percent reliability, which is achievable but not guaranteed.

Second, for road congestion benefits, the forecast (direct) reductions in road congestion assume no indirect “bounce-back” effect, whereby reduced traffic encourages new or longer trips (as noted above). For example, analysis of US highway demand suggests that capacity expansion results in only temporary congestion relief, for up to five years, before congestion returns to pre-existing levels (Hymel 2019).

Third, for agglomeration, the modelled estimates for economic benefits are approximately 50 percent of rail-user benefits, which is close to the upper end of estimates from other transportation studies.

Fourth, for emissions, the estimated benefits from forecast emissions savings do not incorporate assumptions about future changes to fuel efficiency for road and air transport, the emissions associated with power generation for CR/HSR, or the anticipated growth in electric vehicle adoption. In the case of electric vehicle deployment, there is uncertainty regarding the level of uptake, as well as the carbon intensity of electricity generation (although Ontario and Québec have relatively “clean” grids by international standards). A stylized sketch of how such emissions savings are valued appears below.

Fifth, for benefits overall, this study leverages the VIA-HFR forecasts for passenger growth, which are likely to be ambitious, though they have been robustly developed.
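On the emissions point, the sketch below shows the standard appraisal mechanics of valuing avoided emissions with a social cost of carbon (per Government of Canada 2023 guidance). The annual tonnage, price path, and discount rate are purely illustrative assumptions; with these placeholder values the present value happens to land within the $2.6–$7.1 billion range quoted earlier, but that reflects the chosen inputs, not a validation of the study’s estimate.

```python
# Minimal sketch of valuing avoided emissions with a social cost of carbon
# (per Government of Canada 2023 guidance). The tonnage, price path, and
# discount rate are illustrative assumptions only.
YEARS = 60                 # assumed analysis period
AVOIDED_TONNES = 250_000   # tCO2e avoided per year (assumed)
SCC_START = 300.0          # $/tCO2e in the opening year (assumed)
SCC_GROWTH = 0.02          # assumed real growth in the carbon value
DISCOUNT = 0.03            # assumed social discount rate

pv = 0.0
for i in range(YEARS):
    scc = SCC_START * (1 + SCC_GROWTH) ** i           # rising carbon value
    pv += AVOIDED_TONNES * scc / (1 + DISCOUNT) ** i  # discount to present value
print(f"PV of avoided emissions ~ ${pv / 1e9:.1f}B")  # ~$3.4B with these inputs
```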

On the other hand, by focusing on the most material economic benefits, this study may exclude some smaller additional benefits that could be considered in further detail. First, there may be specific impacts on the tourism and hospitality sector. By enhancing travel convenience, CR/HSR is likely to draw more visitors to the cultural, entertainment, and natural attractions across the corridor. While this influx would benefit local businesses by stimulating economic growth and job creation, these impacts are likely to be captured already within the estimate of agglomeration benefits.

Second, CR/HSR would improve national and global competitiveness, enhancing the appeal of Canadian cities to investors and environmentally conscious travellers while helping Canada align more closely with global standards for sustainable, modern infrastructure. Again, these economic benefits are likely to be captured largely within the agglomeration estimates.

Third, this study does not seek to quantify the potential gains to individual productivity from CR/HSR ridership, e.g., from individuals having time to work on the train. There is not expected to be a benefit for existing rail users, as they can already use Wi-Fi on existing VIA Rail services. For individuals switching to rail from road or air, potential benefits would accrue only to business users. And although switchers from road and air could gain opportunities for improved individual productivity, Wi-Fi is increasingly available on airlines and individuals can already dial into meetings remotely whilst driving, limiting the incremental gain.

Fourth, CR/HSR could generate wider economic benefits by increasing competition between businesses along the corridor. The international transport appraisal literature suggests that enhanced transport connectivity can erode price markups (and therefore increase consumer surplus) by overcoming market imperfections (Metrolinx 2021; Department for Transport 2024). However, such impacts are likely to be relatively small – the Department for Transport (UK), for example, values them at 10 percent of the benefits for rail business users only. Furthermore, some sources of market power in Canada are legal in nature (e.g., interprovincial trade barriers), and rail investment alone is unlikely to overcome them.

A further group of issues has been consciously excluded from the methodology in this study. First, impacts on rail crowding are not considered. Some transport appraisals (such as the UK’s economic appraisal of the High Speed 2 project) do estimate the user benefits from reduced crowding. However, this is less applicable for CR/HSR: in the UK, users of existing rail services may be required to stand if the train is overbooked, whereas users of existing VIA Rail services are guaranteed a seat with their booking. Second, impacts on land and property values are not included within the economic benefits. With greater access to efficient transportation, properties near rail stations typically see increased demand and value, boosting local tax revenues and promoting urban revitalization. While CR/HSR could increase values in areas close to the proposed stations, such changes are not additional to the other wider economic benefits, but rather reflect a capitalization of those benefits. To avoid double counting the economic benefits already estimated, they are excluded (Department for Transport 2024).

CR/HSR may improve social equity and accessibility by offering affordable, reliable travel options for those without cars, including low-income individuals, students, and seniors. This expanded access enables broader employment, education, and healthcare opportunities, contributing to a more inclusive society. Whilst this study does not include a distributional analysis, social benefits from greater inclusion and social equity would constitute a benefit of CR/HSR investment and merit further detailed analysis.

Finally, in addition to policy considerations, major investment decisions have a substantial political dimension. For example, Canada is the only G7 country without HSR infrastructure. While cognizant of the political context, the analysis in this study is purely an economic assessment and does not consider political factors.

Conclusion

Canada’s population and economy continue to expand, particularly within the Toronto-Québec corridor. Existing transportation routes will face greater congestion over time, particularly capacity-constrained VIA Rail services. In this context, can Canada afford not to progress with faster, more frequent rail services? There are significant opportunity costs to postponing investment.

This study has developed quantified estimates of the economic benefits of investing in the proposed Rapid Train project in the Toronto-Québec City corridor. Cumulatively, in present-value terms, these economic benefits are estimated to be $11–$17 billion under our modelled conventional rail (CR) scenarios, and larger – at $15–$27 billion – under high-speed rail (HSR) scenarios. Economic benefits arise from several areas, including rail user time savings and improved reliability, reduced congestion on the road network, productivity gains through enhanced connectivity, and environmental benefits through emission reductions. With many commentators highlighting that Canada is experiencing a “productivity crisis” and a “climate emergency,” the projected productivity gains and lower-emission transportation capacity from the Rapid Train project present particularly valuable opportunities.

This study has assessed major economic benefit categories as identified within mainstream transport appraisal guidance. Further research could include additional sensitivity analysis around key parameters, as well as consideration of potential dynamic clustering effects, and projections for housing and land values.

Clearly, there is a cost to investing in a new dedicated passenger rail service: upfront capital investment, ongoing operations and maintenance expenditure, and any financing costs. These costs are not assessed in this study and will need to be considered carefully by policymakers. However, inaction – continuing with the status quo rail infrastructure – also has a significant opportunity cost. Canada would forgo billions of dollars’ worth of economic advantage if it fails to address current challenges, including congestion on the rail and road networks, stifled productivity, and environmental concerns.

This study identifies the multi-billion-dollar economic benefits from the proposed Rapid Train project. While these benefits will need to be weighed alongside the forecast project costs, this study provides a basis for subsequent project evaluation and highlights the significant opportunity costs that Canada is incurring in the absence of investment.

Appendix

For the Silo, Tasnim Fariha, David Jones. The authors thank Daniel Schwanen, Ben Dachis, Glen Hodgson and anonymous reviewers for comments on an earlier draft. The authors retain responsibility for any errors and the views expressed.

References

Ahlfeldt, G., and Feddersen, A. 2017. “From periphery to core: measuring agglomeration effects using high-speed rail.” Journal of Economic Geography.

Albalate, D., and Bel, G. 2012. “High‐Speed Rail: Lessons for Policy Makers from Experiences Abroad.” Public Administration Review 72(3): 336-349.

Amtrak. 2023. Amtrak fact sheet: Acela service.

Atkins, AECOM and Frontier Economics. 2014. First Interim Evaluation of the Impacts of High Speed 1, Final Report, Volume 1. Prepared for the Department of Transport, UK.

Blanquart, C., and Koning, M. 2017. “The local economic impacts of high-speed railways: theories and facts.” European Transport Research Review 9(2): 12-25.

Bonnafous, A. 1987. “The Regional Impact of the TGV.” Transportation 14(2): 127-137.

California High-Speed Rail Authority. 2022. “2022 Business Plan.” Sacramento: State of California.

Central Japan Railway Company. 2020. “Annual Environmental Report 2020.” Tokyo: JR Central.

Crescenzi, R., Di Cataldo, M., and Rodríguez‐Pose, A. 2021. “High‐speed rail and regional development.” Journal of Regional Science 61(2): 365-395.

Dachis, B. 2013. Cars, Congestion and Costs: A New Approach to Evaluating Government Infrastructure Investment. Commentary. Toronto: C.D. Howe Institute. July.

Dachis, B. 2015. Tackling Traffic: The Economic Cost of Congestion in Metro Vancouver. Commentary. Toronto: C.D. Howe Institute. March.

Department of Transport (Ireland). 2023. “Transport Appraisal Framework, Appraisal Guidelines for Capital Investments in Transport, Module 8 – Detailed Guidance on Appraisal Parameters.”

Department for Transport (UK). 2024. “National Road Traffic Survey, TRA0308: traffic distribution by time of day and selected vehicle type.”

______________. 2024. “Road traffic estimates (TRA).”

______________. 2015. “Understanding and Valuing Impacts of Transport Investment.”

______________. 2024. Transport analysis guidance (various).

Economic Affairs Committee, UK government. 2014. Written evidence (Alan Andrews), “EHS0071 – Evidence on The Economic Case for HS2.”

European Court of Auditors. 2018. “Special Report: A European high-speed rail network: not a reality but an ineffective patchwork.”

European Environment Agency. 2020. “Transport and Environment Report 2020: Train or Plane?” EEA Report No 19/2020.

Flyvbjerg, B. 2007. “Cost Overruns and Demand Shortfalls in Urban Rail and Other Infrastructure.” Transportation Planning and Technology 30(1): 9-30.

Garmendia, M., Ribalaygua, C., and Ureña, J. M. 2012. “High speed rail: Implication for cities.” Cities 29(S2), S26-S31.

Graham, D. 2018. “Quantifying wider economic benefits within transport appraisal.”

Government of Canada, House of Commons. 2019. Vote No. 1366. 42nd Parliament, 1st Session.

Government of Canada. 2023. “Social Cost of Greenhouse Gas Estimates – Interim Updated Guidance for the Government of Canada.”

High Speed Rail Authority (HS2 Ltd). 2024. “HS2 Phase One: London to Birmingham Development Report.”

Hymel, K. 2019. “If you build it, they will drive: Measuring induced demand for vehicle travel in urban areas.” Transport Policy Volume 76.

Indonesian-Chinese High-Speed Rail Consortium (KCIC). 2023. “Jakarta-Bandung High-Speed Railway Project Completion Report.”

International Railway Journal. 2024. “Spanish high-speed traffic up 37 percent in 2023.”

International Transport Forum-OECD. 2013. “High Speed Rail Performance in France: From Appraisal Methodologies to Ex-post Evaluation.”

International Union of Railways (UIC). 2022. “High-Speed Rail: World Implementation Report.” Paris: UIC Publications.

International Union of Railways (UIC). 2019. “Carbon Footprint of Railway Infrastructure.” Paris: UIC Publications.

Jacobs, J. 1969. The Economy of Cities. New York: Random House.

Kojima, Y., Matsunaga, T., and Yamaguchi, S. 2015. “Impact of High-Speed Rail on Regional Economic Productivity: Evidence from Japan.” Research Institute of Economy, Trade and Industry (RIETI) Discussion Paper Series 15-E-089.

Lawrence, M., Bullock, R. G., and Liu, Z. 2019. “China’s High-Speed Rail Development.” World Bank Publications.

LEK. 2019. New Routes to Profitability in High-Speed Rail.

Lord Berkeley Review. 2020. A Review of High Speed 2, Dissenting Report by Lord Tony Berkeley. House of Lords.

Marshall, A. 1920. Principles of Economics. London: Macmillan.

Metrolinx. 2018. GO Expansion Full Business Case.

________. 2021. Business Case Manual Volume 2: Guidance.

________. 2021. Traffic Impact Analysis: Durham-Scarborough Bus Rapid Transit.

Morgan, M., Wadud, Z., and Cairns, S. 2025. “Can rail reduce British aviation emissions?” Transportation Research Part D 138.

National High Speed Rail Corporation Limited (NHSRCL). 2023. “Mumbai-Ahmedabad High Speed Rail Project Status Report.”

Office National des Chemins de Fer (ONCF). 2022. “Al Boraq High-Speed Rail Service: Five Year Performance Review.”

OAG. 2019. “High Speed Rail vs Air: Eurostar at 25, The Story So Far.”

Oxera. 2018. “Deep impact: assessing wider economic impacts in transport appraisal.”

Reiter, V., Voltes-Dorta, A., and Suau-Sanchez, P. 2022. “The substitution of short-haul flights with rail services in German air travel markets: A quantitative analysis.” Case Studies on Transport Policy.

Rete Ferroviaria Italiana (RFI). 2023. “Alta Velocità Network Expansion: Naples-Bari Route Completion Report.”

Rossa et al. 2024. “The valuation of delays in passenger rail using journey satisfaction data.” Transportation Research Part D 129.

Rungskunroch, P. 2022. “Benchmarking Operation Readiness of the High-Speed Rail (HSR) Network.”

Statistics Canada. 2023. Table 36-10-0468-01 Gross domestic product (GDP) at basic prices, by census metropolitan area (CMA) (x 1,000,000).

______________. 2024. Table 14-10-0420-01 Employment by occupation, economic regions, annual.

______________. 2024. Table 17-10-0057-01 Projected population, by projection scenario, age and gender, as of July 1 (x 1,000).

______________. 2016. Table 8-1: Domestic Passenger Travel by Mode, Canada.

______________. 2014. Canadian Vehicle Survey: passenger-kilometres, by type of vehicle, type of day and time of day, quarterly.

Transport Canada. 2021. Transportation in Canada 2020, Overview Report, Green Transportation.

_______________. 2018. RA16-Passenger and Passenger-Kms for VIA Rail Canada and Other Carriers.

Ministry of Transportation of Ontario & Transport Canada. 2011. “Updated feasibility study of a high-speed rail service in the Québec City – Windsor Corridor: Deliverable No. 13 – Final report.”

VIA-HFR website. 2024. Frequently Asked Questions.

Vickerman, R. 2018. “Can high-speed rail have a transformative effect on the economy?” Transport Policy 62: 31-37.

Wang, X., and Chen, X. 2019. “High-speed rail networks, economic integration and regional specialisation in China.” Journal of Transport Geography 74: 223-235.