From electrochromic smart windows to using supernova explosions to explore the Earth for mineral deposits: World Economic Forum 2025 Technology Pioneers Leading New Wave of Global Innovation
The World Economic Forum selects 100 start-ups from 28 countries to join its Technology Pioneers community.
The new cohort marks a global surge of emerging technologies, from smart robotics and spatial AI to flying taxis and scalable quantum solutions.
Now in its 25th year, the community has recognized over 1,200 start-ups that have gone on to transform industries and societies worldwide.
For more information on the Annual Meeting of the New Champions 2025, visit wef.ch/amnc25 and share on social media using the hashtag #amnc25, or #2025夏季达沃斯#. Read more about the 2025 Technology Pioneers here.
Geneva, Switzerland, 2025 – The World Economic Forum 2025 Technology Pioneers community is a group of 100 early-stage companies from 28 countries driving innovation across industries and borders. Now in its 25th year, the program celebrates its strongest cohort yet, marked by broader geographical representation, greater diversity beyond Silicon Valley and the rise of more ambitious frontier technologies.
Reflecting wider shifts in the innovation landscape, many of the companies spotlighted are using artificial intelligence (AI) to reach greater scale and sophistication with fewer resources. Several are venturing into less explored frontiers – from asteroid mining and flying electric taxis, to leveraging satellite imagery to transform agriculture and harnessing energy from supernova explosions to locate critical minerals beneath the Earth’s surface.
The geography of innovation is also evolving.
While the United States remains the top contributor to the community, Europe’s share has surged to 28% – up from 20% last year – reflecting the rise of strong tech ecosystems across the region. China and India are also emerging as major tech innovation hubs.
“There has never been a more exciting time to dive headfirst into tech innovation. But no one gets far alone – you need a community to move your mission forward,” said Verena Kuhn, Head of Innovator Communities, World Economic Forum. “As we mark 25 years of the Technology Pioneers programme, this global community continues to connect start-ups to the networks and ecosystems they need to scale.”
This year also marks the 25th Anniversary of the Technology Pioneers programme. Since its inception in 2000, the community has championed early-stage innovation and recognized more than 1,200 companies, many of which have gone on to reshape industries worldwide. Alumni include household names such as Google, PayPal, Dropbox and SoundCloud, underscoring the community’s role as a launchpad for ideas and impact.
The 2025 cohort stands out for its concentration of companies developing breakthrough technologies to address pressing global challenges. These include advanced robotics, customizable space launch services, micro nuclear reactors and more accessible quantum computing applications. These pioneers will contribute cutting-edge insights to Forum initiatives over a two-year engagement program and will also be invited to participate in the Annual Meeting of the New Champions 2025, taking place on 24-26 June in Tianjin, People’s Republic of China.
The 2025 Technology Pioneers include:
Australia • Cauldron – Commercializing advanced continuous fermentation technology to unlock price parity for mainstream bio-manufactured goods.
Brazil • Brain4care – Using AI-based technology to enable timely medical interventions for patients with neurological conditions.
Canada
• Ideon Technologies – Harnessing the energy from supernova explosions in space to image deep beneath the Earth’s surface, transforming how mining companies recover critical minerals.
• Miru – Developing dynamic electrochromic windows that deliver high functionality, experience and energy efficiency for the automotive, transportation and architectural sectors.
Greater China
• Deep Principle – Integrating advanced AI models and quantum chemistry to accelerate the discovery and development of chemical materials.
• GS Biomats – Developing furan bio-based material, a renewable alternative to petroleum-based chemicals, for various uses including biomedical applications.
• HiNa Battery – Producing more sustainable, high-performance, low-cost sodium-ion batteries.
• KaiOS – Providing affordable internet and access to financial services to unserved populations, primarily in South Asia and Africa.
• Lightstandard – Making large language model computing faster and more energy-efficient with photonic computing.
• Noematrix – Researching and developing embodied intelligence systems and related tools and platforms that are compatible with diverse hardware.
• Novlead – Designing a molecular technology platform providing available, accessible and affordable nitric oxide solutions for major clinical needs.
• Shengshu Technology – Building generative AI infrastructure that develops native multi-modal large models for images, 3D and video.
• TRANSTREAMS – Engineering chips and solutions to address the computing power shortages in China during the era of AI-generated content.
• Turing – Providing cutting-edge computing infrastructure and comprehensive AI solutions to drive the future of intelligent computing.
Colombia • Plurall – Supporting early-stage entrepreneurs in emerging markets with fast, accessible working capital and digital payment solutions, leveraging AI models for risk assessment, collections and embedded lending.
Denmark • Arcadia eFuels – Developing and deploying technology to produce electro-sustainable aviation and diesel fuels using renewable electricity, seawater, and captured CO2.
Egypt • Thndr – Offering a digital investment platform with a range of flexible funding methods and educational resources to empower investors.
France
• Ascendance Flight Technologies – Decarbonizing aviation with a hybrid electric propulsion system and hybrid vertical take-off and landing (VTOL) aircraft.
• Beyond Aero – Building the first electric business aircraft powered by hydrogen propulsion, as a sustainable alternative to traditional business jets.
• CO2 AI – Helping large and complex organizations measure their environmental impact, identify credible levers and decarbonize at scale through AI.
• Jimmy – Developing a micro nuclear reactor to provide carbon-free, competitive heat for industrial processes.
• Nabla – Reducing clinician burnout by automating clinical documentation with AI.
• Orakl Oncology – Creating a biology and AI-powered simulation platform to revolutionize oncology drug development.
• Phagos – Deploying a sustainable alternative to antibiotics using bacteriophages and AI.
• Quobly – Making scalable, cost-competitive, large-scale quantum computers.
• Sweetch Energy – Enabling osmotic power generation by harnessing the salinity gradient between freshwater and seawater.
Germany
• Accure – Providing predictive battery analytics software to enhance safety, optimize performance and extend the lifetime of battery systems.
• Black Forest Labs – Building generative deep learning models for media, particularly images and videos.
• eleQtron – Developing quantum computers by leveraging trapped-ion technology.
• Tozero – Pioneering the delivery of recycled lithium in Europe by sustainably recovering critical materials from battery waste.
India
• Agnikul – Providing affordable and customizable space launch services.
• CynLr – Building robots with intuitive vision and enabling manufacturers and logistics providers to build fully automated factories.
• Dezy – Leveraging AI-powered diagnostic technology to build affordable and accessible dental care.
• Digantara – Providing crucial operational support to commercial space operators and space surveillance intelligence to global space agencies.
• Equal – Providing an integrated solution that combines identity verification with consent-driven financial data sharing.
• Exponent Energy – Making 15-minute rapid charging for electric vehicles affordable and scalable through an innovative battery management system, charging algorithms, thermal management and a charging network.
• Freight Tiger – Building India’s largest software-enabled freight network to help businesses move goods with full visibility, efficiency and lower costs.
• GalaxEye – Creating a comprehensive, multi-sensor Earth observation system.
• SolarSquare – Helping homes switch to solar in India with its full-stack solar panel systems.
• The ePlane Co. – Developing flying electric taxis designed for intra-city transportation.
Israel
• Fermata – Providing computer vision solutions for farmers to reduce crop losses and pesticide use.
• Illumex – Empowering organizations to run governed and reliable AI agents through unified business data language and to democratize data access to every user.
• LightSolver – Building a photonic supercomputer by harnessing the power of coupled lasers.
• NanoSynex – Offering a rapid and accurate diagnostic platform for bacterial resistance.
• ZutaCore – Developing waterless direct-to-chip liquid cooling for AI and high-performance computing (HPC) data centres.
Italy • Arsenale Bioyards – Building new lab-to-production infrastructure enabling fast, low-cost biomanufacturing at an industrial scale.
Japan • Sagri – Leveraging satellite data and AI to transform agriculture through land use optimization and sustainability.
Republic of Korea
• Hylium Industries – Providing safe and innovative liquid hydrogen solutions for carbon-free mobility.
• NARA Space – Building South Korea’s first microsatellite constellation for methane point source detection.
• Robocon – Developing robotics and smart factory solutions for the construction and steel industries.
Luxembourg • Tokeny Solutions – Building the compliance infrastructure for digital assets in blockchain and fintech.
Mexico • Allie – Creating closed-loop optimization systems for manufacturing that autonomously adjust production parameters in real time.
Nigeria
• Cybervergent – Providing a platform to automate cybersecurity compliance and risk governance.
• Sabi – Powering the sourcing and distribution of physical goods and critical commodities in Africa.
• ThriveAgric – Empowering smallholder farmers across Africa by linking them to finance, data-driven best practices, and access to local and global markets.
Saudi Arabia • Intelmatix – Making enterprise AI accessible through industry-specific, context-aware AI agents.
Singapore
• Manus – Automating a wide range of practical tasks for personal and professional use with a general AI agent.
• Rize – Decarbonizing rice cultivation in Asia through scalable agricultural innovations.
Spain
• Crisalion Mobility – Offering sustainable air and ground mobility solutions.
• INBRAIN Neuroelectronics – Developing brain-computer interfaces to treat neurological disorders.
Sweden
• Graphmatech – Developing advanced materials infused with graphene to make large-scale industries more innovative and resource efficient.
• Lovable – Using AI to help users create software and web apps without coding expertise.
Switzerland
• HAYA Therapeutics – Developing RNA-based medicines to treat heart, lung and tissue diseases.
• Neural Concept – Accelerating product design through 3D generative engineering and AI.
Uganda • Numida – Using credit models and digital underwriting to provide loans to micro businesses.
Ukraine • Respeecher – Enabling scalable voice cloning across languages and contexts.
United Kingdom
• CuspAI – Using frontier AI to accelerate the discovery and development of materials with specific functionalities.
• Obrizum – Offering personalized digital learning services at scale through an AI-powered platform.
• Oxford Ionics – Building high-performance quantum computers using trapped-ion technology.
United States
• Ammobia – Fuelling the world with cost-effective, lower-carbon ammonia production.
• Archetype AI – Pioneering a new form of Physical AI capable of perceiving, understanding and reasoning about the world through analysing real-time, multimodal sensor data.
• Arine – Integrating cutting-edge AI, clinical expertise and advanced data analytics to deliver medication-based care interventions at the population level.
• AstroForge – Making critical minerals more accessible to humanity by mining asteroids.
• BforeAI – Using behavioural AI to predict and automatically pre-empt malicious campaigns and stop cyberattacks before they occur.
• Candidly – Developing an AI-powered platform to help borrowers manage and overcome educational loans.
• Claryo – Helping warehouse operators maximize operational efficiency by leveraging spatial generative AI.
• Distyl AI – Enabling enterprises to seamlessly integrate AI agents into operations.
• Emvolon – Converting methane emissions into carbon-negative fuels for hard-to-abate sectors onsite.
• Exowatt – Delivering solar power on demand by storing energy and converting it into electricity as needed, helping data centres and the grid run on clean energy 24/7.
• Foundation Alloy – Commercializing solid-state metals technology to make higher-performance metals using less energy.
• HAIQU – Developing a new application execution stack for all modalities of near-term quantum computers.
• Hertha Metals – Developing technology to decarbonize primary steel production.
• Hyfe – Turning food processing waste into chemicals that replace petroleum in everyday goods.
• Lumu Technologies – Providing cybersecurity operations capabilities to help businesses control the impact of cybercrime.
• One Bio – Using biotechnology to add anti-inflammatory plant-based fibres to everyday foods.
• Oberon Fuels – Developing innovative carbon-neutral fuels for the maritime, propane and hydrogen sectors.
• Osmo – Combining frontier AI and olfactory science to digitize scent and enhance well-being.
• Outtake – Securing digital identities by detecting and removing harmful AI-generated content.
• Parallel Learning – Providing licensed therapy and instruction to students with learning differences through a digital platform.
• Pavilion – Increasing efficiency in US public procurement with an AI-enabled government marketplace.
• Reality Defender – Offering multimodal detection of AI-generated media to prevent fraud and disinformation.
• RoboForce – Building AI-powered robotic systems designed for high-risk or repetitive work, to enhance efficiency, productivity and safety across industries.
• Rubi Laboratories – Using biocatalysis to transform CO2 into essential materials like cellulose.
• Shiru – Leveraging AI to identify and develop naturally occurring functional ingredients.
• Starcloud – Constructing data centres in space to solve the AI energy challenge.
• Waterplan – Delivering an AI-powered platform to measure, manage and mitigate water risk.
• Workera – Providing AI-driven workforce skills intelligence and upskilling pathways.
• Workhelix – Helping companies identify AI transformation opportunities and measure return on investment.
Uruguay • Prometeo – Creating a single, borderless banking application programming interface to connect companies with financial institutions across the Americas.
About the Annual Meeting of the New Champions 2025
The 16th Annual Meeting of the New Champions will take place from 24 to 26 June 2025 in Tianjin, People’s Republic of China, under the theme “Entrepreneurship for a New Era.” The meeting will convene over 1,700 leaders from business, government, civil society, academia, international organizations, innovation and media to explore entrepreneurial solutions to global challenges.
About the Technology Pioneers
Launched in 2000, the Technology Pioneers community marks its 25th anniversary in 2025 as a leading platform for early-stage companies from around the world that are shaping the future through breakthrough technologies and innovations. These companies are selected for their potential to have a significant impact on business and society and are invited to engage with public and private sector leaders through the World Economic Forum’s global platform.
The Technology Pioneers community is part of the Innovator Communities within the Forum’s Centre for the Fourth Industrial Revolution. The Innovator Communities convene the world’s leading start-ups across different growth stages, from early-stage Technology Pioneers to growth-stage Global Innovators and unicorn companies valued at more than $1 billion.
“Pay attention, students; write this down for memorization.” The Trivium and Quadrivium, a medieval revival of classical Greek educational theories, defined the seven liberal arts necessary as preparation for entering higher education: grammar, logic, rhetoric, astronomy, geometry, arithmetic and music. Even today, the disciplines identified since Greek times are still reflected in many education systems, and numerous disciplines and branches have since emerged, ranging from history to computer science.
Now comes the Information Age, bringing with it Big Data, cloud computing and artificial intelligence, as well as visualization techniques that facilitate the acquisition of knowledge.
All this technology dramatically increased the amount of knowledge we could access and the speed at which we could generate answers to our questions.
“New and more innovative knowledge maps are now needed to help us navigate the complexities of our expanding landscape of knowledge,” says Charles Fadel. Fadel is the founder of the Center for Curriculum Redesign, which has been producing new knowledge maps that redesign knowledge standards from the ground up. “Understanding the interrelatedness of knowledge areas will help to uncover a logical and effective progression for learning that achieves deep understanding.”
Joining us in The Global Search for Education to talk about what students should learn in the age of AI is Charles Fadel, author of Four-Dimensional Education: The Competencies Learners Need to Succeed.
“We need to identify the Essential Content and Core Concepts for each discipline – that’s what the curation effort must achieve so as to leave time and space for deepening the disciplines’ understanding and developing competencies.” — Charles Fadel
Charles, today students have the ability to look up anything. Technology that enables them to do this is also improving all the time. If I want to solve a math problem, I use my calculator, and if I want to write a report on the global effects of climate change, I pull out my mobile. How much of the data kids are being forced to memorize in school is now a waste of time?
The Greeks bemoaned the invention of the alphabet because people did not have to memorize the Iliad anymore. Anthropologists tell us that memorization is far more trained in populations that are illiterate or do not have access to books. So needing to memorize even less in an age of Search is a natural evolution.
However, there are also valid reasons for why some carefully curated content will always be necessary.
Firstly, Automaticity. It would be implausible for anyone to constantly look up words or simple multiplications – it takes too long and breaks the thought process, very inefficiently. Secondly, Learning Progressions. A number of disciplines require a gradual progression towards expertise, and again, one cannot constantly look things up; that would be completely unworkable. Finally, Competencies (Skills, Character, Meta-Learning). These cannot be developed in thin air; they need a base of (modernized, curated) knowledge to leverage.
Sometimes people will say “Google knows everything” or “ask AI,” and it is a striking idea, but the reality is that, for now, Google merely stores everything. Of course, with AI, what is emerging is the ability to analyze a large number of specific problems and make predictions, so eventually Google and similar companies will know a lot more about humans than humans know about themselves!
“What we need to test for is Transfer – the ability to use something we have learned in a completely different context. This has always been the goal of an Education, but now algorithms will allow us to focus on that goal even more, by ‘flipping the curriculum’.” — Charles Fadel
If Child A has memorized the data in her head while Child B has to look up the answers, some might argue that Child A is smarter than Child B. I would argue that AI has leveled the playing field for Child A and Child B, particularly if Child B is digitally literate, creative and passionate about learning. What are your thoughts?
First, let’s not conflate memory with intelligence, which games like Jeopardy implicitly do. The fact that Child A memorized data does not mean they are “smarter” than Child B, even though memory implies a modicum of intelligence. Second, even Child B will need some level of content knowledge to be creative, etc. Again, this is not developed in thin air, per the conversation above.
So it is a false dichotomy to talk about Knowledge or Competencies (Skills/Character/Meta-learning); it has to be Knowledge (modernized, curated) and Competencies. We’d want children both to Know and to Do, with creativity and curiosity.
Lastly, we need to identify the Essential Content and Core Concepts for each discipline – that’s what the curation effort must achieve so as to leave time and space for deepening the disciplines’ understanding and developing competencies.
Given the impact of AI today and the advancements we expect each year, when should (all) school districts introduce open laptop examinations to allow students equal access to information and place emphasis on their thinking skills?
The question has more to do with Search algorithms than with AI, but regardless, real life is open-book, and so should exams be. And yes, this will force students to actually understand their material, provided the tests do more than multiple-choice trivialities, which, by the way, we find even at the college level for the sake of easier grading.
What we need to test for is Transfer – the ability to use something we have learned in a completely different context. This has always been the goal of an Education, but now algorithms (search, AI) will allow us to focus on that goal even more, by “flipping the curriculum”.
Today, if a learner wants to do a deep dive into any specific subject, AI search allows them to do this outside of classroom time. What do you say to a history teacher who argues there’s no need to revise subject content in his classroom?
For all disciplines, not just History, we must strike a careful balance between “just-in-time, in context” and “just-in-case”. Context matters to anchor the learning: in other words, real-world projects give immediate relevance to the learning, which helps it to be absorbed. And yet projects can also be time-inefficient, so a healthy balance of didactic methods like lectures is still necessary. McKinsey has recently shown that the ratio today is about 25% projects, which should grow a bit more over time as education systems embed them better, with better teacher training.
Second, it should be perfectly fine for any student to do deep dives as they see fit, but again in balance: there are other competencies needed to become a more complete individual. If one is ahead of the curve in a specific topic, it is of course very tempting to follow one’s passion; at the same time, it is important to make sure that other competencies get developed too. So balance and a discriminating mind matter.
Employers consider ethics, leadership, resilience, curiosity, mindfulness and courage as being of “very high” importance to preparing students for the workplace. How does your curriculum satisfy employers’ demands today and in the years ahead?
These Character qualities are essential for employers and life needs alike, and they have converged away from the false dichotomy of “employability or psycho-social needs.” A modern curriculum ensures that these qualities are developed deliberately, systematically, comprehensively and demonstrably. This is achieved by matrixing them with the Knowledge dimension, meaning teaching Resilience via Mathematics, Mindfulness via History, and so on. Employers have mixed views and mixed success in assessing these qualities, so it is a bit unfair for them to demand a specificity they do not have themselves. At the same time, it would be equally wrong for school systems to lose relevance.
“Educators have been tone-deaf to the needs of employers and society to educate broad and deep individuals, not merely ones that may go to college. The anchoring of this problem comes from university entrance requirements.” — Charles Fadel
There is a significant gap between employers’ view of the preparation levels of students and the views of students and educators. The problem likely exists partly because of incorrect assumptions on both sides, but there are also valid deficiencies. What specific inadequacies are behind this gap? What system or process can be devised to resolve this issue?
On one side, employers are expecting too much and shirking their responsibility to raise the level of their employees, expecting them to graduate 100% “ready to work” while offering nothing more than job-specific training at best. On the other side, educators have been tone-deaf to the needs of employers and society to educate broad and deep individuals, not merely ones that may go to college.
The anchoring of this problem comes from university entrance requirements (in the US, AP classes, etc.) and their associated assessments (SAT/ACT scores). These have for decades back-biased what is taught in schools, in a very self-serving manner – narrowly, as a test of whether a student will succeed at university. It is time to deconstruct the requirements and broaden/deepen them to serve multiple stakeholders.
For the Silo, C.M. Rubin.
(All photos are courtesy of our friends at CMRubinWorld)
C. M. Rubin and Charles Fadel
Join me and globally renowned thought leaders including Sir Michael Barber (UK), Dr. Michael Block (U.S.), Dr. Leon Botstein (U.S.), Professor Clay Christensen (U.S.), Dr. Linda Darling-Hammond (U.S.), Dr. Madhav Chavan (India), Charles Fadel (U.S.), Professor Michael Fullan (Canada), Professor Howard Gardner (U.S.), Professor Andy Hargreaves (U.S.), Professor Yvonne Hellman (The Netherlands), Professor Kristin Helstad (Norway), Jean Hendrickson (U.S.), Professor Rose Hipkins (New Zealand), Professor Cornelia Hoogland (Canada), Honourable Jeff Johnson (Canada), Mme. Chantal Kaufmann (Belgium), Dr. Eija Kauppinen (Finland), State Secretary Tapio Kosunen (Finland), Professor Dominique Lafontaine (Belgium), Professor Hugh Lauder (UK), Lord Ken Macdonald (UK), Professor Geoff Masters (Australia), Professor Barry McGaw (Australia), Shiv Nadar (India), Professor R. Natarajan (India), Dr. Pak Tee Ng (Singapore), Dr. Denise Pope (US), Sridhar Rajagopalan (India), Dr. Diane Ravitch (U.S.), Richard Wilson Riley (U.S.), Sir Ken Robinson (UK), Professor Pasi Sahlberg (Finland), Professor Manabu Sato (Japan), Andreas Schleicher (PISA, OECD), Dr. Anthony Seldon (UK), Dr. David Shaffer (U.S.), Dr. Kirsten Sivesind (Norway), Chancellor Stephen Spahn (U.S.), Yves Theze (Lycee Francais U.S.), Professor Charles Ungerleider (Canada), Professor Tony Wagner (U.S.), Sir David Watson (UK), Professor Dylan Wiliam (UK), Dr. Mark Wormald (UK), Professor Theo Wubbels (The Netherlands), Professor Michael Young (UK), and Professor Minxuan Zhang (China) as they explore the big picture education questions that all nations face today.
C. M. Rubin is the author of two widely read online series for which she received a 2011 Upton Sinclair award, “The Global Search for Education” and “How Will We Read?” She is also the author of three bestselling books, including The Real Alice in Wonderland, is the publisher of CMRubinWorld and is a Disruptor Foundation Fellow.
The Metropolitan Museum of Art Launches New Immersive Virtual Reality and Online Feature with Iconic Works from Its Collection
The Temple of Dendur and works from the Arts of Oceania galleries have been transformed for virtual reality (VR) experience and on the web
The Met’s new features, created in collaboration with the platform Atopia, introduce a new way for cultural institutions around the world to build their own VR and online exhibitions.
(New York, November 2025) – The Metropolitan Museum of Art has launched two new virtual reality (VR) features, Dendur Decoded and Oceania: A New Horizon of Space and Time, that explore the Museum’s beloved Temple of Dendur and monumental works from the Oceanic art collection in the newly reopened Michael C. Rockefeller Wing – such as the Ceremonial House Ceiling from the Kwoma people of Papua New Guinea, the Asmat bisj poles, and Atingting kon (slit gongs) from Vanuatu – in 3D. The experiences will allow global audiences to view these treasured galleries and works using a personal VR headset or on The Met’s website. Designed in collaboration with Atopia, a platform for immersive art and culture, The Met’s virtual experiences introduce a new way for art institutions to create and publish their own VR and web features, providing more digital access to VR innovations across the museum field.
The Met’s first VR experiences, Dendur Decoded and Oceania: A New Horizon of Space and Time, were developed in close consultation with Met curators. They feature original, innovative storytelling and high-resolution 3D scans created by The Met’s Imaging team. These experiences allow virtual visitors to delve into artworks through movement, sound, interaction and play. From stepping inside the Temple of Dendur to bringing the 17-foot bisj poles to eye level, these virtual experiences offer a singular opportunity to explore these iconic works.
“The Met collection is enjoyed by millions of visitors a year, and by exploring the vast possibilities of virtual spaces, we can offer unparalleled cultural experiences to audiences no matter where they are located,” said Max Hollein, The Met’s Marina Kellen French Director and CEO. “These two new VR and web features foreground compelling storytelling and curatorial scholarship, and they provide immersive, participatory access to some of The Met’s remarkable works of art.”
Annabell Vacano, founder of Atopia, said, “Until now, immersive exhibitions were bespoke and expensive. We created Atopia so museums of all sizes could design, publish, and scale interactive storytelling so their collections can be accessed from anywhere in the world. The Met has been an incredible partner in designing Atopia’s storytelling tools, and it’s been an honor to work with their world-class teams.”
Dendur Decoded
The Dendur Decoded VR and web experience is organized as a vividly detailed adventure arranged in four “acts” and includes over 150 newly presented pieces of content, with materials (images and video) from archives at The Met and UNESCO. The content was created in collaboration with Isabel Stünkel, Curator, Department of Egyptian Art, and Erin Peters, Assistant Professor, Art History & Visual Culture at Appalachian State University; with support from Diana Craig Patch, Lila Acheson Wallace Curator in Charge of Egyptian Art, and Janice Kamrin, Curator in Egyptian Art at The Met.
It begins with “Act I: Explore Dendur,” which introduces the Temple and helps visitors learn how to read aspects of its decoration. It continues with “Act II: Dendur in Nubia,” presenting a 3D and 360-degree film about the Temple of Dendur’s original location along the west bank of the Nile River, its dismantling as part of the international UNESCO Campaign to Save the Monuments of Nubia to protect it from being submerged beneath Lake Nasser, and its award to the United States in 1967. “Act III: Reconstructing Dendur” invites visitors to virtually rebuild part of the temple and learn how The Met reassembled it in New York in a new gallery that opened to the public on September 27, 1978. “Act IV: Reflection” showcases past MetLiveArts performances and the ways in which contemporary artists have been inspired by the Temple. There is also an optional opportunity to leave a personal contemplation or observation through a voice note.
Oceania: A New Horizon of Space and Time
Oceania: A New Horizon of Space and Time celebrates the dazzling Oceanic works in the Museum’s newly reopened Michael C. Rockefeller Wing. Fifteen objects are contextualized with sound, story, and a spatial design inspired by an outdoor environment that evokes the Pacific Islands. Within the space, these objects are accompanied by illuminating content such as immersive original audio and Pacific storytelling, archival imagery, 360-degree video, and high-resolution 3D models. Featuring works from across The Met collection of Oceanic art, highlights in the VR and web experience include The Met’s impressive Ceremonial House Ceiling, which evokes the polychrome interior of a men’s ceremonial house in the Sepik River region of Papua New Guinea; five soaring upright spirit poles (bisj) from the Asmat people of Western New Guinea; and the 14-foot-tall Atingting kon (slit gong) from Vanuatu.
In this exploratory environment there is a lush virtual gallery populated by the 3D-scanned objects and immersive soundscapes. Examples include the Sawos Ancestor Figure, which invites close looking through a compelling audio story about a battle in which the ancestral figure came to life, paired with an interactive 3D model. The Ceremonial House Ceiling includes a game where visitors discover motifs across the 270 pangal (painted panels), including crocodiles, insects, and cassowaries. The Body Mask, created by an Asmat artist, includes contemporary photography by Joshua Irwandi, a documentary photographer based in Jakarta, Indonesia, showing how these masks are made and worn by the Asmat people of southwest New Guinea.
Developed along with Maia Nuku, The Met’s Evelyn A. J. Hall and John A. Friede Curator for Arts of Oceania, and Sylvia Cockburn, Senior Research Associate for Arts of Oceania, the experience will be animated with voices from across the Pacific Islands, including a greeting by Michael Mel (PhD, performance artist, lecturer, curator, and teacher and currently Senior Lecturer and Head of Expressive Arts Department at the University of Goroka), and a concluding sunset ceremony by Che Wilson (Ngāti Rangi-Whanganui, Tūwharetoa, Mōkai Pātea, Ngāti Apa, Ngā Raurua), a Māori leader with a career that spans cultural advocacy, governance, and leadership.
VR and Online Innovations for the Cultural Sector
For The Met’s virtual experiences, the Museum’s Emerging Technology and Digital department worked collaboratively with Atopia to develop a feature that will enable museums of all sizes to design and publish similar immersive exhibitions in-house. Through a “no-code” editor available on the platform, museum curators and designers can drag and drop images, 3D scans, and didactic information from their collections into virtual spaces. These can then be launched on the platform, becoming instantly available on the web and in VR.
Access and Availability
The two immersive exhibitions are available now for free on The Met’s website and on Meta Quest 2, 3, and 3S headsets. Audio across the experience is closed-captioned.
Atopia is compatible with standard web browsers on desktop and laptop computers as well as with personal VR headsets. It also supports both individual and invite-only multiplayer visits.
Related Programs
These VR and web features will also be activated through several events, including Met Expert Talks. These talks give Museum visitors the opportunity to interact with the virtual experiences on headsets provided by The Met for a deeper, more contextualized viewing. There will also be VR pop-ups at Teens Take The Met on May 15, 2026, as well as during an upcoming Teen Friday Career Labs session, where teens can hear directly from the VR creative team. For homebound audiences unable to visit the new Arts of Oceania galleries in person, special Collection Tours will be offered for Oceania: A New Horizon of Space and Time via headsets provided by the Museum. More details on VR events at The Met will be announced.
Credits
Dendur Decoded and Oceania: A New Horizon of Space and Time were created with a cross-disciplinary team from across The Met, led by Brett Renfer, Senior Project Manager of Emerging Technologies, along with Curatorial, Education, Imaging, and Digital.
This project is made possible by the Director’s Fund.
About The Metropolitan Museum of Art
The Met presents art from around the world and across time for everyone to experience and enjoy. The Museum lives in two iconic sites in New York City—The Met Fifth Avenue and The Met Cloisters. Millions of people also take part in The Met experience online. Since it was founded in 1870, The Met has always aspired to be more than a treasury of rare and beautiful objects. Every day, art comes alive in the Museum’s galleries and through its exhibitions and events, revealing both new ideas and unexpected connections across time and across cultures. Discover more at metmuseum.org.
About Atopia
Atopia is a new way to experience culture online. From any web browser or VR headset, audiences can step inside immersive exhibitions designed by leading museums worldwide. Our no-code platform empowers cultural institutions to create and share virtual experiences at scale—bringing exhibitions to global audiences beyond physical walls. Our mission: to open access to culture everywhere. Discover more at https://atopia.space
You know the look: A long, low-slung sedan finished in shiny black paint with equally bright chrome rolls through town. Beige, burgundy, and blue cars move out of the way, magnetically repelled by the menacing four-door.
This threatening style has been idolized by Hollywood since the 1960s, perhaps most famously in the unfortunately short-lived ABC television program The Green Hornet, in which actor Van Williams drove a Chrysler Imperial modified by Dean Jeffries. It was painted black, of course, and the chrome slats that ran horizontally across its huge grille clearly meant business—even on the 19-inch TV screens that took up considerable living room real estate in a 1960s home.
Black paint, while popular today, was a daring, high-style choice in the 1960s that was not-so-subtly influenced by the largely chauffeur-driven cars that carried around heads of state and other major politicians. For instance, the Soviet Union’s KGB notoriously drove around in black-painted GAZ Chaika sedans that had a distinctly Detroit-inspired appearance. (The irony seems to have been lost.)
An outsider might not expect Japan, where the pavement has been specifically engineered to be quiet, to have a small but mighty homegrown industry producing the world’s most ominous cars.
Nissan
The Japanese Royal Family Needed a Ride of Their Own
Dating back more than 1400 years, Japan’s Imperial Household Agency does just what its name suggests: it manages the royal family’s affairs. This is no easy task for a country so steeped in tradition. In fact, the Imperial Household Agency has more than 1000 civil servants, which stands in marked contrast to the self-funded, non-governmental managers of, say, the British and Swedish royal families.
The Imperial Household Agency’s wide-ranging list of tasks includes everything from ensuring that the Emperor’s family is comfortable and healthy to organizing and overseeing ceremonies. In the early 1960s, the Imperial Household Agency called automakers together and told them to submit designs for an official state vehicle. The car needed to have four doors, be reasonably spacious, and have a prestigious but not overly ostentatious appearance.
Nissan
Prior to World War II, the Emperor’s vehicle fleet consisted of large, imported cars from brands like Rolls-Royce and Daimler. The country’s nascent automotive industry focused on small, mostly work-oriented vehicles. By the early 1960s, Japan’s recovery from the war’s devastating effects was well underway, fueled heavily by Western investment. While Japan didn’t give up on its traditions, the bright lights of Tokyo had a strong American influence. So too did the country’s cars, like the Toyota Crown that looked like last season’s Chevy. So when the Imperial Household Agency came calling, it should come as no surprise that the results looked rather Detroit-ish.
The winner was a brand you might not have heard of: Prince Motor Company. Founded in 1947, Prince was Japan’s short-lived flagship automaker in the early 1960s, though it was in the midst of being folded into Nissan.
The Prince Royal that got the royal nod, so to speak, was based on the Prince Gloria, a vehicle already used by the Japanese government in an official capacity. The Prince Royal was extended to provide those in back with stretch-out legroom, and the rear doors were modified to open coach-style for easier and more elegant access. While not a particularly showy car, the Prince Royal has an understated elegance. Its stacked headlights recall the Ford Galaxie and the big W108-generation Mercedes-Benz models. The tall greenhouse, on the other hand, is a nod to practicality rather than style. Inside, in the Japanese luxury tradition, the wool seats make nary a peep as passengers slide across. Leather would be rather squeakier.
The Prince Royal gained the Imperial Household Agency’s nod as transport for the Emperor of Japan. These cars served until 2006, when they were replaced by a special version of the Toyota Century.
Nissan
Underhood, the Prince Royal utilized a 6.4-liter V-8—not Japan’s first, but only a couple of years after the so-called “Toyota Hemi.” An eight-cylinder design was, admittedly, an odd choice; while inherently fairly smooth, the engine was undoubtedly a costly thing to develop. Fewer than 10 were ever built, one of which lives at the unusual and yet highly appealing Nissan Engine Museum and Guest Hall next to the company’s powertrain factory in Yokohama, Japan.
Just five Prince Royals were built, and they stayed in service for a staggering 40 years before they were replaced by a limousine version of the Toyota Century. But the Century doesn’t really owe its status to the Prince Royal. It should thank the Nissan President, a model that was developed back when Nissan and Prince were quasi-competitors.
Into the 1980s, the Nissan President retained a classic, but hardly ostentatious, look, as seen on this 1982 President Type-C.
Nissan
The President, as its name suggests, was intended from the start as a government vehicle. Unlike Toyota’s Crown, the first Japanese car to use a V-8, the President was developed in direct response to the Imperial Household Agency’s request. At nearly 200 inches long, the President was a very large sedan by Japanese standards. Its styling is contemporary if a bit bland, even in comparison to the Prince Royal. Horizontal headlights embedded in a broad, generic grille give way to fenders that had an almost Ford Falcon modesty to them. There’s a bit more drama at the rear with big NISSAN badging. Copious chrome lines the rocker panels.
While the Prince Royal ended up being chosen to transport the Emperor, Nissan’s President didn’t go home empty-handed. Instead, it was used by the country’s Prime Minister. Government versions were only minimally modified compared to the President models sold through Nissan’s dealership network in Japan, though official-use models were invariably painted black. Those available to consumers came in a slightly wider range of colors. The President was a sign that its owner—and, most likely, the person riding in the back—had arrived. It was the Lincoln Continental of its era. Today, when government spending is closely watched by a hawkish public, there is no U.S.-market comparison.
In Japan, fabric upholstery like the wool seen in the 1973 Nissan President remains an indicator of a high-end vehicle because it makes no sound as a human slides across it.
Nissan
Nissan didn’t dominate government contracts, but it was a commanding presence into the late 1980s. Then, almost inexplicably, the brand gave up. Its chrome-laden second-generation President, which was based on an early 1970s design, was replaced with a comparatively plebeian design that would be sold in the U.S. as the Infiniti Q45. That’s not to say that the Q45 was a dud, but its big plastic bumpers and, in Japanese-market spec, Jaguar-ish grille were not in keeping with tradition. The Imperial Household Agency famously rejected a stretched version of the 1990 President in favor of the Toyota Century.
Toyota’s Century Begins
The original Toyota Century was overshadowed, at least to a degree, by the Nissan President that beat it to the market in Japan and initially secured more government contracts.
Toyota
Thanks in part to the floodgates opened by the 25-year rule for importing vehicles from Japan, the Toyota Century has something of a cult status among enthusiasts in the U.S. today. It was not always this way; while the Century was undoubtedly a high-tech vehicle at its 1967 debut, the Imperial Household Agency initially passed it up in favor of the Nissan President. However, the Century’s rise coincided with Toyota’s phenomenal growth in the 1970s and 1980s, when it began to overtake Nissan as the premier Japanese automaker.
The original Century ran for three decades, always with V-8 power. Despite the fact that its specs and power could have appealed to buyers in Europe and, especially, the U.S., it was rarely sold in left-hand-drive markets. (Toyota flirted with the idea in the early 2000s before concluding that the conservative Century would be no match for the comparatively flamboyant Mercedes-Benz S-Class.)
Toyota
Yet it’s the Century that endures in Japan, an icon in its own time. The Emperor of Japan rides around in a stretched one, approved by the Imperial Household Agency, of course. The redesigned model that arrived in 2018 carries on the style of the 1960s original, in marked contrast to the edgy, modern look found in any other Toyota or Lexus model. There’s even an SUV version now, though its front-wheel-drive architecture and hybrid V-6 powertrain mean it’s more like a snazzy Toyota Highlander than a bespoke Emperor-hauler.
Toyota
Clearly, the Century has won out, so much so that Toyota recently announced it will position the Century as its own brand, a more conservative sibling to Lexus. It did face some limited competition from Mitsubishi with its mid-1960s Debonair. While the Mitsubishi, with its slab sides and fenders that leap forward past its grille, is basically a rolling villain, the four- or six-cylinder sedan lacked the interior volume and the power to compete with the Century or the President. Its angular 1986 replacement, which looked sort of like a K-Car with fender mirrors, was anything but debonair.
Though its effort was comparatively short-lived, the Mitsubishi Debonair boasted a fantastic name and slab-sided Lincoln Continental-inspired looks, if not Conti-style proportions.
Mitsubishi
The Yakuza Turns State Cars Into Mafia Cars
Nobody does organized crime like the Japanese—and that is not meant as a compliment. The Yakuza, as the Japanese crime syndicates are broadly known, hit its peak right around the time when the decidedly more upstanding Imperial Household Agency was asking automakers to design a state vehicle.
Those vehicles were soon appropriated by the Yakuza. In retrospect, they have a sinister, angry look. If the bad guy in a period flick drives a car in Tokyo, it’ll be a President, a Century, or perhaps an early Debonair. Set in 1999, HBO’s Tokyo Vice puts the Q45-adjacent Nissan President front and center. While it may not have been the vehicle of choice for the Emperor, that era’s President was the car to have for the heads of organized crime. Perhaps that’s why Nissan steered away from tradition with its final redesign, a swoopy model unsuccessfully sold here as the Infiniti Q70.
The 1990 Nissan President abandoned the 1960s-style chrome bumpers of its predecessors.
Nissan
These big, black sedans have an authoritarian presence. Their drivers may act as though they have impunity. Not only are their cars imposing, but they look official—even if those inside are doing anything but official business. Yakuza members often mounted curtains inside their Presidents and Centurys, a style known as VIP that persists today—albeit in a much broader and harder-to-define look.
We have no direct equivalent in Canada or the U.S., at least in terms of how the criminal underground appropriated cars meant for high-ranking government officials. The Crown Victorias once favored by Canadian and American cops lack the luxury and exclusivity of a Century or President. A Chevy Tahoe can’t be all that menacing if you can find dozens of them in the carpool line at your local elementary school. And while our head of state has long had a highly modified Cadillac-ish limousine, which has been described as a tank with a limousine body, it lacks a showroom counterpart. That said, the crested wreath brand made a strong appearance in the late-1990s/early-2000s setting of HBO’s The Sopranos.
It’s a different story in Japan, though. There, a government official arrives in black-and-chrome style, as dictated, if indirectly, by the edicts set forth by the Imperial Household Agency. The automotive equivalent of a tuxedo is, after all, always in style. For the Silo, Andrew Ganz/Hagerty.
AWS Outage Created “Perfect Storm” for Social Engineering Attacks
Last week Amazon Web Services (AWS) went down worldwide, including here in Canada, causing a ripple effect from governments and local municipalities to enterprises, small businesses, and the individuals who rely on these services daily.
AWS is a cloud-based service that thousands of major companies use not only to store their data but also to run the apps and software behind many critical business services.
From basic communication apps such as Snapchat, Signal, and Reddit to airlines such as Delta and United reporting disruptions to their customer-facing operations, outages like this highlight our reliance on just a few cloud services companies (AWS, Microsoft Azure, and Google Cloud) to ‘run the country,’ so to speak.
The AWS outage also affected shopping websites, banking apps, and even streaming and smart home devices.
And while organizations scramble to ensure business operations continue to run, it’s also an opportunity for individuals to do a quick check-in on their own cyber hygiene.
Cybercriminals and hackers can easily take advantage of these types of outages to deploy an array of social engineering attacks.
Whether in the office or at home, nothing is more frustrating than losing the ability to access files and documents, and communicate with business associates or loved ones, especially in an emergency or crisis.
Hackers who rely on mass urgency and panic will see this as an opportunity to take advantage of people’s heightened emotions with phishing emails offering to “fix” the issue and get you back online and into your accounts or apps.
But in reality, these scammers are looking to steal your personal information, such as login credentials, by tricking you into updating your software or resetting your password.
During major outages, users should avoid clicking on any links in emails, texts and pop-ups claiming to be able to fix the outage.
Additionally, double check that any alerts or update messages from organizations, such as your bank or payment apps, are verified from the official website or app.
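That advice can be illustrated with a simple check: before trusting a link, compare its hostname against the institution's known official domain. A minimal Python sketch (the domain names here are invented for illustration, not real banking domains):

```python
from urllib.parse import urlparse

# Hypothetical set of domains you already know to be official
OFFICIAL_DOMAINS = {"mybank.example", "payments.example"}

def is_official_link(url: str) -> bool:
    """Accept only the exact official domain or a true subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

# A genuine subdomain of the official site passes...
print(is_official_link("https://login.mybank.example/reset"))       # True
# ...but a look-alike domain that merely *starts* with it does not
print(is_official_link("https://mybank.example.attacker.example"))  # False
```

The key detail, and the one phishers exploit, is that matching must anchor on the end of the hostname: "mybank.example.attacker.example" belongs to "attacker.example", not to your bank.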
This is the time to make sure you are using a strong password and multifactor authentication to prevent any unauthorized access to your accounts.
Delay Things
Individuals should also delay sensitive actions, such as major financial transactions, password resets, or critical software updates, until the service in question has been officially declared restored.
Furthermore, when the service disruption has ended, users should also monitor any affected accounts for unusual activity, discrepancies, and duplicate or fraudulent transactions.
Finally, this is an excellent reminder for individuals to make sure they have a back-up system in place to access important documents and for communications.
This can be as easy as keeping a secondary email account or even a back-up mobile phone. For the Silo, Stefanie Schappert.
ABOUT THE AUTHOR
Stefanie Schappert, MSCY, CC, Senior Journalist at Cybernews, is an accomplished writer with an M.S. in cybersecurity, immersed in the security world since 2019. She has a decade-plus of experience in America’s #1 news market, working for Fox News, Gannett, Blaze Media, Verizon Fios1, and NY1 News. With a strong focus on national security, data breaches, trending threats, hacker groups, global issues, and women in tech, she is also a commentator for live panels, podcasts, radio, and TV. She earned the ISC2 Certified in Cybersecurity (CC) certification as part of the initial CC pilot program, has participated in numerous Capture-the-Flag (CTF) competitions, and took 3rd place in Temple University’s International Social Engineering Pen Testing Competition, sponsored by Google. She is a member of the Women’s Society of Cyberjutsu (WSC) and of Upsilon Pi Epsilon (UPE), the International Honor Society for Computing and Information Disciplines.
ABOUT CYBERNEWS
Friends of The Silo, Cybernews is a globally recognized independent media outlet where journalists and security experts debunk cyber myths through research, testing, and data. Founded in 2019 in response to rising concerns about online security, the site covers breaking news, conducts original investigations, and offers unique perspectives on the evolving digital security landscape. Through white-hat investigative techniques, the Cybernews research team identifies and safely discloses cybersecurity threats and vulnerabilities, while the editorial team provides cybersecurity-related news, analysis, and opinions by industry insiders with complete independence.
Cybernews has earned worldwide attention for its high-impact research and discoveries, which have uncovered some of the internet’s most significant security exposures and data leaks. Notable ones include:
Cybernews researchers discovered multiple open datasets comprising 16 billion login credentials from infostealer malware, social media, developer portals, and corporate networks – highlighting the unprecedented risks of account takeovers, phishing, and business email compromise.
Cybernews researchers analyzed 156,080 randomly selected iOS apps – around 8% of the apps present on the App Store – and uncovered a massive oversight: 71% of them expose sensitive data.
Recently, Bob Dyachenko, a cybersecurity researcher and owner of SecurityDiscovery.com, and the Cybernews security research team discovered an unprotected Elasticsearch index, which contained a wide range of sensitive personal details related to the entire population of Georgia.
Audio-Technica expands turntable accessory offerings for all vinyl enthusiasts
Stow, OH, October 2025 — Our friends at Audio-Technica, a leading innovator in transducer technology for over 60 years, are excited to launch a new range of turntable accessories designed to help vinyl listeners achieve the best from their record collections. The latest additions include two new slipmats, precision alignment tools and a stainless-steel disc stabilizer. These new additions join Audio-Technica’s established lineup of turntable accessories including the AT6012 Record Cleaning Kit, stylus cleaners and more, expanding a complete family of products designed to help vinyl users care for and enjoy their collections to the fullest.
New to the Audio-Technica Slipmat series is the AT-SMCR2 Cork-Rubber Slipmat (MAP: $35.00 usd/ $49.00 cad) and AT-SMC1 Cork Slipmat (MAP: $25.00 usd/ $35.00 cad). The AT-SMCR2 is engineered from a premium blend of cork and rubber to absorb a wide range of vibrations, particularly at lower frequencies, delivering clearer audio reproduction. The cork-rubber blend also provides antistatic properties to reduce pops and clicks caused by static discharge. For listeners seeking a simpler option, the AT-SMC1 provides excellent resonance control and a stable playback surface without shedding particles or attracting dust like traditional felt mats.
Beyond vibration control, Audio-Technica introduces two new cartridge alignment tools designed to ensure precise playback geometry: the AT-VTAZ1 Azimuth + VTA Alignment Tool (MAP: $14.00 usd/ $20.00 cad) and AT-CAP1 Cartridge Alignment Protractor (MAP: $17.00 usd/ $24.00 cad). The AT-VTAZ1 allows users to achieve accurate tonearm height and cartridge azimuth adjustment. Proper alignment ensures even stylus wear, accurate channel balance, and minimal distortion. The AT-CAP1 utilizes the widely used Baerwald alignment method to set cartridge offset angle and null points to deliver optimal tracking and reduced distortion.
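The geometry behind those null points, the two groove radii where tracking error falls to zero, is compact enough to compute directly. As a sketch only (the standard IEC groove radii of 60.325 mm and 146.05 mm are assumed here; Audio-Technica does not publish the AT-CAP1's internal figures), one common closed form for the Baerwald/Löfgren A null radii is:

```python
import math

def baerwald_nulls(r_inner: float = 60.325, r_outer: float = 146.05):
    """Baerwald (Lofgren A) null radii in mm, given inner/outer groove
    radii (defaults are the standard IEC record dimensions)."""
    harmonic = 2 * r_inner * r_outer / (r_inner + r_outer)
    spread = (r_outer - r_inner) / (math.sqrt(2) * (r_inner + r_outer))
    return harmonic / (1 + spread), harmonic / (1 - spread)

inner_null, outer_null = baerwald_nulls()
print(round(inner_null, 1), round(outer_null, 1))  # 66.0 120.9
```

Those two radii, roughly 66 mm and 121 mm for IEC records, are exactly where a protractor like the AT-CAP1 asks you to square the cartridge to its grid.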
The new AT628a Stainless Steel Disc Stabilizer (MAP: $79.00 usd/ $111.00 cad) is designed to minimize resonance and keep records firmly in place during playback. The stabilizer accommodates even slightly warped records with two recessed inner rings on its underside for secure contact.
Rounding out the new launches are the AT-ST3 Speaker Stands (MAP: $59.00 usd/ $83.00 cad), designed to enhance the performance of the AT-SP3X or other similarly sized bookshelf speakers. Constructed from rigid alloy steel with vibration-damping cork feet, each stand provides stable support for speakers weighing up to 3 kg (6.6 lb). The 13-degree angled design directs sound upward for clearer projection and helps reduce sound wave reflections off hard surfaces, ensuring cleaner, more accurate audio reproduction.
For the Silo, Jarrod Barker.
Audio-Technica was founded in 1962 with the mission of producing high-quality audio for everyone. As we have grown to design critically acclaimed headphones, turntables and microphones, we have retained the belief that great audio should not be enjoyed only by the select few, but accessible to all. Building upon our analog heritage, we work to expand the limits of audio technology, pursuing an ever-changing purity of sound that creates connections and enriches lives.
ALUULA Composites’ super-strong, lightweight polyethylene material is now being used to develop expandable habitats that will let NASA’s astronauts live safely and comfortably on the moon following the landing planned for 2027.
This small company on Canada’s west coast is playing a big role in helping astronauts return to and orbit the moon in 2026.
Artemis II crew members (from left) CSA (Canadian Space Agency) astronaut Jeremy Hansen, and NASA astronauts Christina Koch, Victor Glover, and Reid Wiseman walk out of Astronaut Crew Quarters inside the Neil Armstrong Operations and Checkout Building to the Artemis crew transportation vehicles prior to traveling to Launch Pad 39B as part of an integrated ground systems test at Kennedy Space Center in Florida photo: NASA
ALUULA Composites recently signed an agreement with Max Space, an American company, to use its innovative composite material to build space habitats on the moon. The company’s ultra-high-molecular-weight polyethylene (UHMWPE) laminate will be used to create a large living and working area for NASA’s astronauts when they return to the moon in September 2026.
The innovative material was selected because it has eight times the strength-to-weight ratio of steel and is extremely durable, which is ideal for space travel.
The Max Space team with their new expandable space habitat. photo: Max Space
The first Max Space inflatable space habitat is slated to launch with SpaceX in 2026. The Max Space inflatables can be delivered into space in very small packages and then unfolded and expanded to create a much larger work space. For the Silo, Paul Clarke.
A recent consumer survey, backed by similar results from Elon University, reveals that AI adoption for car shopping is skyrocketing, rapidly becoming a standard part of the automobile buying process. Fully one in four buyers have already used AI tools this year to research, compare prices, negotiate, and otherwise outsmart dealerships, and an overwhelming 88% found them helpful. Signaling a seismic shift in the way North Americans shop for cars, nearly half of consumers indicated plans to use AI in their next purchase. The benefits aren’t only for buyers: dealerships are gleaning critical business intelligence from AI to inform sales strategies, train staff, and elevate customer engagement. The report below from our friends at CarEdge, which offers its own AI Negotiator car buying tool that saves shoppers thousands, details the first data-backed look at how AI tools are reshaping the car buying experience.
Mornine, an AI-powered car dealership robot.
Study: 1 in 4 Car Buyers Tap AI for Better Deals
Artificial intelligence is changing the way North Americans buy cars, and it’s a transition that is happening quickly. In the first-ever survey of its kind, CarEdge asked 500 car shoppers if they’re using AI tools like ChatGPT to research, compare, and negotiate during the car buying process. The results confirm a major shift is underway. One in four car buyers in 2025 are already using AI tools to gain an edge, and future buyers are even more likely to embrace these technologies.
Car buyers are finding AI to be a valuable tool. Among those who used tools like ChatGPT, Perplexity, Google Gemini, and others, 88% said it was helpful. AI is quickly becoming a trusted co-pilot for car buyers.
Key Findings: Car Buying Is Changing
The 2025 CarEdge AI & Car Buying Survey reveals a clear and growing trend: AI tools are quickly becoming part of the car buying process for a significant portion of consumers. Here are the standout findings:
1 in 4 Car Buyers Use AI
25% of car buyers in 2025 say they used or plan to use AI tools like ChatGPT during the shopping or buying process. This contrasts with a recent survey by Elon University that found 52% of Americans now use AI large language models. While signs point towards increased adoption of AI tools, the CarEdge survey found that most car buyers are still in the early stages of integrating these tools into high-stakes decisions like vehicle purchases. This suggests there’s still significant room for growth in AI adoption amongst car buyers.
AI Use Is Accelerating
Among those who haven’t bought a car yet this year, 40% say they are using or plan to use AI tools during their search or deal-making. This is nearly 3x higher than the 14% seen among those who already bought a car earlier in the year.
AI Tools Deliver Results
Among those who used AI:
88% say the tools were helpful
32% found them very helpful
60% used them “a lot” during the process
The AI Holdouts: Drivers Who Lease
Of the respondents who had already leased a car in 2025, none reported using any AI tools.
The AI-Adopting Buyer: Who’s Using It, and How?
AI adoption among car buyers is still in its early stages, but clear trends are beginning to emerge.
Among Buyers Who Already Purchased in 2025:
Just 14% of those who already bought a vehicle this year used AI tools during the process. Adoption rates were nearly identical across new and used buyers, with 14% in each group saying they used AI tools.
Among Future Car Buyers:
The numbers jump significantly when looking at those who haven’t yet bought in 2025. Among this group — who represent 39% of total respondents — 40% say they either already use or plan to use AI tools during their car search and buying process.
That’s more than triple the current usage rate among recent buyers, suggesting AI adoption is accelerating as awareness grows and tools become easier to use.
This group also appears to be more proactive: 60% of those who used AI tools during their buying journey said they used them “a lot,” while 40% used them only occasionally.
What Car Buyers Are Using AI Tools For
AI tools are quickly becoming essential research companions for car shoppers looking to make more informed, confident decisions. After all, why go it alone when a wealth of automotive knowledge powered by large language models (LLMs) is right in your pocket?
Among buyers who used AI tools during their car purchase or lease process, here’s how they put them to work:
88% — Researching Vehicles
The most common use by far, AI tools helped buyers learn about different models, trims, features, and reliability. For many, it was like having an always-available expert to explain the pros and cons of their options.
64% — Comparing Prices and Market Values
Buyers used AI to better understand fair pricing, from invoice pricing to out-the-door costs.
44% — Learning Negotiation Strategies
More than four in ten AI users leaned on these tools to prepare for conversations with salespeople. Whether role-playing negotiation scenarios or asking how to spot add-on fees, this group used AI to level the playing field at the dealership.
11% — Exploring Finance and Lease Options
A much smaller portion of buyers used these tools to become familiar with leasing vs. financing, how to calculate payments, and similar queries.
Industry Implications
Car buying has always been tilted in favor of the dealership. Information asymmetry — what the dealer knows versus what the customer knows — has long been the source of consumer frustration, confusion, and overpayment.
That dynamic is beginning to shift.
This survey confirms what many in the industry are only starting to realize: AI is giving car buyers the upper hand. Tools like ChatGPT are helping consumers cut through the noise, ask smarter questions, and avoid common dealership traps. Instead of relying on guesswork or scattered advice, buyers are turning to AI for fast, personalized guidance at every step.
But one auto industry veteran has words of caution for buyers relying heavily on AI tools.
“It’s both surprising and a little scary to see how quickly people are turning to AI to guide such a major financial decision,” said Ray Shefska, Co-Founder of CarEdge. “While tools like ChatGPT can be powerful, they’re only as good as the data behind them. AI should complement your research, not replace your own critical thinking.”
That perspective underscores the real takeaway of this report: AI works best when it’s used thoughtfully as a tool, not as a crutch. In an age where automation raises fears of job loss or decision-making without human oversight, this survey offers a more optimistic view — one where technology helps everyday consumers make smarter choices. Used wisely, AI can help level the playing field and bring more transparency and fairness to the car buying experience.
Methodology
This survey was conducted by CarEdge between June 19 and June 24, 2025. A total of 500 U.S. respondents participated, recruited through the CarEdge email newsletter and social media channels. Questions were tailored based on buying status to better understand how and when AI tools were used in the car shopping process.
For the Silo, Karen Hayhurst.
About CarEdge
Founded in 2019 by father-and-son team Ray and Zach Shefska, CarEdge is a leading platform dedicated to empowering car shoppers with free expert advice, in-depth market insights, and tools to navigate every step of the car-buying journey. From researching vehicles to negotiating deals, CarEdge helps consumers save money, time, and hassle; hundreds of thousands of happy consumers have used CarEdge to buy their cars with confidence. With trusted resources like the CarEdge AI Negotiator tool, Research Center, Vehicle Rankings and Reviews, and hundreds of guides on YouTube, CarEdge is redefining transparency and fairness in the automotive industry. Follow them on YouTube, TikTok, X, Facebook, and Instagram for actionable car-buying tips and market insights. Learn more at www.CarEdge.com.
A horse and buggy. Excellent horsepower, huh? People got tired of the nurturing it took to care for a work horse. People wanted more, and as with anything, the need for something better fuels the spark for innovation. How about something that does the work but doesn’t need rest? Doesn’t need medication? Doesn’t need someone to shovel up its crap? Take this formula and you get the steam engine, not a crazy engine, but an engine nonetheless. Suddenly the glowing aura of potential is perceivable, right on the horizon. Now we can have multiple horsepower without the care. Still needed someone to shovel, though.
The Horsey Horseless. Designed to prevent horses from being frightened by a car.
Enter, the mother of current automotive technology today, the oil industry.
Instead of burning coal, why not find ways to refine oil into fuel to run things on? Who knows, we could have been running advanced versions of steam engines today. They can actually be made fairly efficient and clean using current technology, and steam cars were quite practical back in 1918.
Then the internal combustion engine enters the scene. The oil companies love this, and a mass-marketed engine that is completely dependent on oil is born. Just think, this is awesome for business: these engines need oil for both fuel and lubrication. Then all the different designs start flowing. (Off the top of my head and in no chronological order.) The single cylinder, then 2, then 4, then 6, then the flathead V8. Now this is where we start to see major horsepower and design improvements. The trusty ole’ inline 6s, the small-block-eating slant 6s, the overhead-valve V engine, big blocks, small blocks, Hemis. There are pancake engines, W engines, rotary engines, VTECs, boxer engines and many, many more. (Not to mention all of the different fuel delivery systems!)
The cylinder, valves and crankshaft of the internal combustion engine.
The one thing that really makes me scratch my head is the fact that it took so long to get hybrids, smart cars, electric cars, and hydrogen cars that are actually worth looking at and driving. I mean, why is it that I can take a full-size 2008 Chevrolet Silverado with a 5.3L Vortec engine, add a cold air intake, a MagnaFlow exhaust system, and a good Edge Products programmer, and get an average of over 36 miles per gallon with the same horsepower? Why is it that I (not being an automotive engineer) can do this, but you can’t just buy one with those numbers from the manufacturer?
Not to mention “brown gas” converters, which have been tested on most common engine types: they take mineral water and pass current between two electrified plates (similar to a car battery) to create a safe amount of hydrogen gas as a by-product, which can make your car run the same on half the amount of fuel. The thing that boggles me is that most people have never even heard of these. You can buy the plans off the internet (not as complicated as it sounds), or I can even get ready-to-install units from my performance-parts supplier. I just find it strange that automotive technology and fuel sources have taken this long to start to veer even slightly away from oil (or, as ol’ Jed calls it, “Texas tea”).
At one point we bridged the gap from the horse and buggy to the steam engine, and then to internal combustion. With the technology we have now, we should have much higher mpg and horsepower, or an extremely viable alternative. It really makes me wonder where we might be now if this technology had been steered in a different direction from the start. It’s been over 100 years of improving the same technology using more or less the same fuel source. There are guys in the States who run their small pickups and VW buses on garage-refined deep-fryer grease. There are guys who run pickups off wood-fire smoke. Just something to think about.
October, 2025 – Canada has world-class strength in AI research but continues to fall short in widespread adoption, according to a new report from the C.D. Howe Institute. On the heels of the federal government’s announcement of a new AI Strategy Task Force, the report highlights the urgent need to bridge the gap between research excellence and real-world adoption.
In “AI Is Not Rocket Science: Ideas for Achieving Liftoff in Canadian AI Adoption,” Kevin Leyton-Brown, Cinda Heeren, Joanna McGrenere, Raymond Ng, Margo Seltzer, Leonid Sigal, and Michiel van de Panne note that while Canada ranks second globally in top-tier AI researchers and first in the G7 for per capita publications, it is only 20th in AI adoption among OECD countries. “This matters for the economy as a whole, because such knowledge translation is a key vehicle for productivity growth,” the authors say. “It is terrible news, then, that Canada experienced almost no productivity growth in the last decade, compared with a rate 15 times higher in the United States.”
The authors argue that new approaches to knowledge translation are needed because AI is not “rocket science”: instead of focusing on a single industry sector, the discipline develops general-purpose technology that can be applied to almost anything. This makes it harder for Canadian firms to find the right expertise and for academics to sustain ties with industry. Existing approaches – funding academic research, directly subsidizing industry efforts through measures such as SR&ED and superclusters, and promoting partnerships through programs like Mitacs and NSERC Alliance – have not solved the problem.
Four ideas to help firms leverage Canadian academic strength to fuel their AI adoption include: a concierge service to match companies with experts, consulting tied to graduate student scholarships, “research trios” that link AI specialists with domain experts and industry, and a major expansion of AI training from basic literacy to dedicated degrees and continuing education. Drawing on their experiences at the University of British Columbia, the authors show how local initiatives are already bridging gaps between academia and industry – and argue these models should be scaled nationally.
“Canada’s unusual strength in AI research is an enormous asset, but it’s not going to translate into real-world productivity gains unless we find better ways to connect AI researchers and industrial players,” says Kevin Leyton-Brown, professor of computer science at the University of British Columbia and report co-author. “The challenge is not that AI is too complicated – it’s that it touches everything. That means new models of partnership, new incentives, and new approaches to education.”
AI Is Not Rocket Science- 4 Ideas in Detail
Idea 1: A Concierge Service for Matchmaking
We have seen that it is hard for industry partners to know who to contact when they want to learn more about AI. Conversely, it is at least as hard for AI experts to develop a broad enough understanding of the industry landscape to identify applications that would most benefit from their expertise. Given the potential gains to be had from increasing AI adoption across Canadian industry, nobody should be satisfied with the status quo.
We argue that this issue is best addressed by a “concierge service” that industry could contact when seeking AI expertise. While matchmaking would still be challenging for the service itself, it could meet this challenge by employing staff who are trained in eliciting the AI needs of industry partners, who understand enough about AI research to navigate the jargon, and who proactively keep track of the specific expertise of AI researchers across a given jurisdiction. This is specialized work that not everyone could perform! However, many qualified candidates do exist (e.g., PhDs in the mathematical sciences or engineering). Such staff could be funded in a variety of different ways: for example, by an AI institute; a virtual national institute focused on a given application area; a university-level centre like UBC’s Centre for Artificial Intelligence Decision-making and Action (CAIDA); a nonprofit like Mitacs; a provincial ministry for jobs and economic growth; or the new federal ministry of Artificial Intelligence and Digital Innovation.
Once an organization that facilitates matchmaking is in place, it could make sense for the same office to provide additional services that speed AI adoption but that are not core strengths of academics. Some examples include project management, programming, AI-specific skills training and recruitment, and so on. Overall, such an organization could be funded by some combination of direct government support, direct cost recovery, and an overhead model that reinvests revenue from successful projects into new initiatives.
Idea 2: Consultancy in Exchange for Student Scholarships
Many businesses that would benefit from adopting AI do not need custom research projects and do not want to wait a year or more to solve their problems. The lowest-hanging fruit for Canadian AI adoption is ensuring that industry is well informed about potentially useful, off-the-shelf AI technologies. We thus propose a mechanism under which AI experts would provide limited, free consulting to local industry. AI experts would opt in to being on a list of available consultants. A few hours of advice would be free to each company, which would then have the option of co-paying for a limited amount of additional consulting, after which it would pay full freight if both parties wanted to continue. The company would own any intellectual property arising from these conversations, which would thus focus on ideas in the public domain. If the company wanted to access university-owned IP, it could shift to a different arrangement, such as a research contract. This system would work best given a concierge service like the one we just described. The value offered per consulting hour clearly depends on the quality of the academic–industry match, and some kind of vetting system would be needed to ensure the eligibility of industry participants.
Why would an AI expert sign up to give advice to industry? All but the best-funded Canadian faculty working in AI report that obtaining enough funding to support their graduate students is a major stressor. Attempting to establish connections with industry is hard work, and such efforts pay off only if the industry partner signs on the dotted line and matching funds are approved. There is thus space to appeal to faculty with a model in which they “earn” student scholarships for a fixed amount of consulting work. For example, faculty could be offered a one-semester scholarship for every eight hours set aside for meetings with industry, meaning that one weekly “industry office hour” would indefinitely fund two graduate students. Consulting opportunities could also be offered directly to postdoctoral fellows or senior (e.g., post-candidacy) PhD students in exchange for fellowships. In such cases, trainees should be required to pass an interview, certifying that they have both the technical and soft skills necessary to succeed in the consulting role. The concierge service could help decide which industry partners could be routed to PhD students and which need the scarcer consulting slots staffed by faculty members.
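The office-hour arithmetic above can be sketched in a few lines. The 16-week semester length is our assumption for illustration only, not a figure from the report:

```python
# Back-of-envelope check of the "industry office hour" proposal.
HOURS_PER_SCHOLARSHIP = 8   # consulting hours earning one one-semester scholarship
WEEKS_PER_SEMESTER = 16     # assumed semester length (hypothetical)

hours_banked = 1 * WEEKS_PER_SEMESTER            # one office hour per week
scholarships = hours_banked // HOURS_PER_SCHOLARSHIP
print(scholarships)  # 2 scholarships earned per semester,
                     # i.e. two students funded on an ongoing basis
```

Under these assumptions the scheme is self-sustaining: each semester of weekly office hours earns enough scholarships to fund two students through the next.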
The system would offer many benefits. From the industry perspective, it would make it straightforward to get just an hour or two of advice. This might often be enough to allow the company to start taking action towards AI adoption: there is a rich ecosystem of high-performance, reliable, and open-source AI tools; often, the hard part is knowing what tool to use in what way. Beyond the value of the advice itself, consulting meetings offer a strong basis for building relationships between academics and industry representatives, in which the academic plays the role of a useful problem solver rather than of a cold-calling salesperson. These relationships could thus help to incubate Mitacs/Alliance-style projects when research problems of mutual interest emerge (though also see our idea below about how restructuring such projects could help further).
For academics, the system would constitute a new avenue for student funding that would reward each hour spent with a predictable amount of student support. Furthermore, it would offer scaffolded opportunities to deepen connections with industry. The system would come with no reporting requirements beyond logging the time spent on consulting. The faculty member would be free to use earned scholarships to support any student (regardless, for example, of the overlap between the student’s research and the topics of interest to companies), increasing flexibility over the Mitacs/Alliance system, in which specific students work with industry partners. Students who self-funded via consulting would learn valuable skills and would expand their professional networks, improving prospects for post-graduation employment.
Finally, the system would also offer multiple benefits from the government’s perspective. It would generate unusually high levels of industrial impact per dollar spent (consider the number of contact hours between academia and industry achieved per dollar under the funding models mentioned in Section 3). All money would furthermore go towards student training. The system would automatically allocate money where it is most useful, directing student funding to faculty who are both eager to take on students and relevant to industry, all without the overhead of a peer-review process. And it would generate detailed impact reports as a side effect of its operations, since each hour of industry–academia contact would need to be logged to count towards student funding.
Idea 3: Grants for Research Trios
Our third proposal is an approach for expanding the Mitacs/Alliance model to make it work better for AI. Industry–academia partnerships leverage two key kinds of expertise from the academic side: methodological know-how for solving problems and knowledge about the application domain used for formulating such problems in the first place. In fields for which the set of industry partners is relatively small and relatively stable, it makes sense to ask the same academics to develop both kinds of expertise. In very general-purpose domains like AI, it holds back progress to ask AI experts to become domain experts, too. Instead, it makes sense to seek domain knowledge from other academics who already have it.
We thus propose a mechanism that would fund “research trios” rather than bilateral research pairings. Each trio would contain an AI expert, an academic domain expert, and an industry partner. This approach capitalizes on the fact that there is a huge pool of academic talent outside core AI with deep disciplinary knowledge and a passion for applying AI. While such researchers are typically not in a position to deeply understand cutting-edge AI methodologies, they are ideally suited to serve as a bridge between researchers focused on AI methodologies and Canadian industrial players seeking to achieve real-world productivity gains. In our experience at UBC, the pool of non-AI domain experts with an interest in applying AI is considerably larger than the pool of AI experts. One advantage of this model is that projects can be initiated by the larger population of domain experts, who are also more likely to have appropriate connections to industry. Beyond this, involving domain experts increases the likelihood that a project will succeed and gives industry partners more reason to trust the process while a solution is being developed.
The model meets a growing need for funding researchers outside computer science for projects that involve AI, rather than concentrating AI funding within a group of specialists. At the same time, it avoids the pitfall of encouraging bandwagon-jumping “applied AI” projects that lack adequate grounding in modern AI practices. Finally, it not only transfers AI knowledge to industry, but also does the same to both the domain expert and their students.
Idea 4: Greatly Expanded AI Training
As AI permeates the economy, Canada will face an increasing need for AI expertise. Today, that training comes mostly in the form of computer science degrees. Just as computer science split off from mathematics in the 1960s, AI is emerging today as a discipline distinct from computer science. In part, this shift is taking the form of recognizing that not every AI graduate needs to learn topics that computer science rightly considers part of its core, such as software engineering, operating systems, computer architecture, user interface design, computer graphics, and so on. Conversely, the shift sees new topics as core to the discipline. Most fundamental is machine learning. Dedicated training in AI will require a deeper focus on the mathematical foundations of probability and statistics, building to advanced topics such as deep learning, reinforcement learning, machine learning theory, and so on. Various AI modalities also deserve separate study, such as computer vision, natural language processing, multiagent systems, robotics, and reasoning. Training in ethics, optional in most computer science programs, will become essential.
Beyond dedicated training in the core discipline, we anticipate huge demand for broad-audience AI literacy training; for AI minors to complement other disciplinary specializations; for continuing education and “micro-credential” programs; and for executive education in AI. There is also a growing need for “AI Adoption Facilitators”: bridge-builders who can help established workers in medium-to-large organizations understand how data-driven tools could offer value in solving the problems they face. Training for this role would emphasize business principles and domain expertise, but would also require firmer foundations in machine learning and data science than are currently typical in those disciplines.
I graduated from the University of California at Berkeley about a decade ago with a degree in Mechanical Engineering. I received two job offers, one from SETI to work on high performance signal processing and the other from industry.
One does not simply walk away from SETI, so I had the pleasure of joining the Berkeley SETI Research Center (BSRC). I received a warm welcome and was promptly sent to West Virginia to help install a new SETI system at the Green Bank Telescope.
There was a steep learning curve, but I was fascinated by BSRC’s work and couldn’t wait to actually understand what was going on.
As it turns out, our group is looking to expand its computing power, providing the ability to look at more star systems with habitable planets, expand the involvement of volunteers and acquire larger volumes of data; in short, broaden the search and increase our chances of intercepting a signal. Now I’m working on setting up new servers, network hardware, and signal-processing systems at Green Bank. We’re hoping to get data flowing and recording soon, and make it available for the interested public.
From the 19th-century idea of drawing a giant Pythagorean triangle in the Siberian tundra to signal extraterrestrials, to our current collection of servers storing and analyzing data, it is not hard to see how much progress has already been made.
Running SETI software on your home computer looks like this.
Funding from the Breakthrough Initiatives is spawning new projects that would not have been otherwise possible. SETI@home is planning to work with Breakthrough Listen to collect and distribute data from the Green Bank and Parkes telescopes. However, in order to sustain the whole SETI@home effort we could still use support from our devoted SETI@home contributors.
Recently, I spent a day at the Bay Area Science Festival talking to kids and their adults. I was fascinated by just how stoked kids are about SETI. Some came with prepared questions and showed incredible curiosity and intelligence. The BSRC team is hoping to inspire kids to pursue science careers and I think searching for life beyond Earth is a great way to get them interested and involved. I hope you continue your support for this fascinating endeavor, and keep your eyes on the stars. For the Berkeley SETI Research Center team, Zuhra Abdurashidova.
Supplemental – via the Nemesis Maturity YouTube channel
Wow Signal – Scientists say that if the signal came from extraterrestrials, they are likely to be an extremely advanced civilization, as the signal would have required a 2.2-gigawatt transmitter, vastly more powerful than any on Earth.
The signal bore the expected hallmarks of non-terrestrial and non-Solar System origin.
One summer night in 1977, Jerry Ehman, a volunteer for SETI, or the Search for Extraterrestrial Intelligence, may have become the first man ever to receive an intentional message from an alien world. Ehman was scanning radio waves from deep space, hoping to randomly come across a signal that bore the hallmarks of one that might be sent by intelligent aliens, when he saw his measurements spike.
The signal lasted for 72 seconds, the longest period of time that the array Ehman was using could measure. It was loud and appeared to have been transmitted from a place no human has gone before: the constellation Sagittarius, near a star called Tau Sagittarii, 122 light-years away.
All attempts to locate the signal again have failed, leading to much controversy and mystery about its origins and its meaning.
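To give a sense of scale for the 2.2-gigawatt figure, here is a toy inverse-square calculation of the flux such a transmitter would deliver at Earth. Treating the source as isotropic is our simplifying assumption for illustration; a transmitter beamed toward Earth would need far less power:

```python
import math

P_W = 2.2e9          # quoted transmitter power, watts
LY_M = 9.4607e15     # metres per light-year
d_m = 122 * LY_M     # quoted distance, near Tau Sagittarii

# Inverse-square law: power spread evenly over a sphere of radius d
flux = P_W / (4 * math.pi * d_m ** 2)
print(f"{flux:.2e} W/m^2")  # a vanishingly small flux, on the order of 1e-28 W/m^2
```

The tiny result illustrates why interstellar signalling demands either enormous transmitter power or tight beaming, and why detections like the Wow signal are so remarkable.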
Considered by many to be The Holy Grail of Polyphonic Synthesis, this meticulously refurbished Oberheim FVS-1 took 88 hours of skilled vintage synth tech time via our friends at tonetweakers to perfect. The FVS-1 contains 4 classic Oberheim SEM modules, each providing a single dual oscillator voice. Sounds are dialed in manually on each module, with global control over the most tweaked parameters via the programmer module, where patches are also saved and recalled. Since each SEM is manually adjusted, it’s hard to get them sounding exactly the same. The result is a much more organic, slightly detuned, richer, truly magical sound than you’d get out of most other poly synths.
Famous users include Lyle Mays, 808 State, Depeche Mode, Styx, Pink Floyd, The Shamen, Gary Wright, Joe Zawinul and John Carpenter (yep the film director of The Thing, Big Trouble in Little China, Starman, Escape From New York and other classics often composed and recorded music for his movies). You won’t find a better example of this beautiful classic synthesizer, so if you’re looking for an exceptional 4 voice, now’s the time. Visit our friends at tonetweakers.com to learn more.
The OB Four Voice contains 4 SEMs and a mixer module. This beautiful instrument can play up to 8 oscillators at once, for insanely humongous sounds.
One of the first
The 4 Voice was one of the first polyphonic synths. Each of the four Synthesizer Expander Modules (SEMs) can be assigned to a different note. Splitting voices between modules is also possible, as is a monophonic unison mode. A single voice is surprisingly powerful, offering 2 oscillators, 2 envelopes (1 for filter, 1 for volume), an LFO, pulse width modulation and a real sweet multimode filter with sweepable mode (which few synths offered). The programmer module allows fast saving and recall of programmed sounds. With a combined 8 oscillators, these sound unbelievably fat. Even a single SEM sounds great. In unison mode, play all VCOs on one key for one of the most powerful vintage synth sounds ever. Nothing sounds like it to us, and we’ve played everything. This is a personal favorite. This FVS-1 has the standard configuration of modules: 4 x Synthesizer Expander Module (SEM), Keyboard Output module, Polyphonic Keyboard module, and Programmer module.
No clangs or zaps
If you are an analog synth head who makes musical sounds, you need one of these. To avoid disappointment though, we would recommend anyone looking for a dedicated sound effects machine to go for something else. This 4 voice is fabulous at musical tones and can make some interesting sound effects but there are better choices for clangs, zaps, explosions and similar atonal timbres.
Other famous users include: Joe Zawinul, Chick Corea, Larry Fast (Synergy), Jan Hammer, Herbie Hancock, Human League, Michael McDonald / Doobie Brothers, Patrick Moraz, Steve Porcaro, The Shamen, Tim Simenon, Depeche Mode, Vince Clarke / Erasure, Tangerine Dream, Stevie Wonder and many other influential musicians who could afford one – this was a very expensive instrument when it came out!
Artificial Intelligence (AI) has been part of our lives for decades, but since the public launch of ChatGPT showcased generative AI in 2022, society has faced unprecedented technological evolution.
With digital technology already a constant part of our lives, AI has the potential to alter the way we live, work, and play – but exponentially faster than conventional computers have. With AI comes staggering possibilities for both advancement and threat.
The AI industry creates unique and dangerous opportunities and challenges. AI can do amazing things humans can’t, but in many situations – referred to as the black box problem – experts cannot explain how particular decisions are made or where information comes from. These outcomes can sometimes be inaccurate because of flawed data, bad decisions or infamous AI hallucinations. There is little regulation or guidance in software, and effectively no regulation or guidelines in AI.
How do researchers find a way to build and deploy valuable, trusted AI when there are so many concerns about the technology’s reliability, accuracy and security?
That was the subject of a recent C.D. Howe Institute conference. In my keynote address, I commented that it all comes down to software. Software is already deeply intertwined in our lives, from health, banking, and communications to transportation and entertainment. Along with its benefits, there is huge potential for the disruption and tampering of societal structures: Power grids, airports, hospital systems, private data, trusted sources of information, and more.
Consumers might not incur great consequences if a shopping application goes awry, but our transportation, financial or medical transactions demand rock-solid technology.
The good news is that experts have the knowledge and expertise to build reliable, secure, high-quality software, as demonstrated across Class A medical devices, airplanes, surgical robots, and more. The bad news is this is rarely standard practice.
As a society, we have often tolerated compromised software for the sake of convenience. We trade privacy, security, and reliability for ease of use and corporate profitability. We have come to view software crashes, identity theft, cybersecurity breaches and the spread of misinformation as everyday occurrences. We are so used to these trade-offs with software that most users don’t even realize that reliable, secure solutions are possible.
With the expected potential of AI, creating trusted technology becomes ever more crucial. Allowing unverifiable AI in our frameworks is akin to building skyscrapers on silt. Security and functionality by design trump whack-a-mole retrofitting. Data must be accurate, protected, and used in the way it’s intended.
Striking a balance between security, quality, functionality, and profit is a complex dance. The BlackBerry phone, for example, set a standard for secure, trusted devices. Data was kept private, activities and information were secure, and operations were never hacked. Devices were used and trusted by prime ministers, CEOs and presidents worldwide. The security features it pioneered live on and are widely used in the devices that outcompeted BlackBerry.
Innovators have the know-how and expertise to create quality products. But often the drive for profits takes precedence over painstaking design. In the AI universe, however, where issues of data privacy, inaccuracies, generation of harmful content and exposure of vulnerabilities have far-reaching effects, trust is easily lost.
So, how do we build and maintain trust? Educating end-users and leaders is an excellent place to start. They need to be informed enough to demand better, and corporations need to strike a balance between caution and innovation.
Companies can build trust through strong adherence to safe software practices, education in AI evolution and compliance with evolving regulations. Governments and corporate leaders can keep abreast of how other organizations and countries are enacting policies that support technological evolution, instituting accreditation, and offering financial incentives that support best practices. Across the globe, countries and regions are already developing strategies and laws to encourage responsible use of AI.
Recent years have seen the creation of codes of conduct and regulatory initiatives such as:
The Bletchley Declaration, Nov. 2023, an international agreement to cooperate on the development of safe AI, has been signed by 28 countries;
US President Biden’s 2023 executive order on the safe, secure and trustworthy development and use of AI; and
Governing AI for Humanity, UN Advisory Body Report, September 2024.
We have the expertise to build solid foundations for AI. It’s now up to leaders and corporations to ensure that much-needed practices, guidelines, policies and regulations are in place and followed. It is also up to end-users to demand quality and accountability.
Now is the time to take steps to mitigate AI’s potential perils so we can build the trust that is needed to harness AI’s extraordinary potential. For the Silo, Charles Eagan. Charles Eagan is the former CTO of BlackBerry and a technical advisor to AIE Inc.
Living in space has significant effects on the human body. As we prepare for journeys to more distant destinations like Mars, humankind must tackle these risks to ensure safe travel for our astronauts.
With AI reshaping everything from finance to fast food, the $1.5T auto retail industry is finally facing its overdue disruption. The typical car-buying experience—riddled with hidden fees, lead bloat, pricing games and low trust—has remained stubbornly analog. But now, with 90% of dealerships in America (and a growing share in Canada and Mexico) experimenting with AI tools and 1 in 4 buyers already using AI to shop, the tide is turning. Agentic AI technology is fundamentally reshaping one of the most significant purchases in a person’s life.
Zach Shefska, Co-Founder and CEO of CarEdge, asserts that agentic AI is the key to rebuilding trust, removing friction and leveling the playing field for both buyers and sellers. From AI-powered shopping assistants that negotiate on your behalf, to data tools that reveal deceptive dealership practices, Shefska is a pioneer in “agentic AI” — a new form of artificial intelligence bringing much-needed transparency to the industry.
The Broken Status Quo: Car buying is frustrating and inefficient for both consumers and dealerships—highlighting key stats like 72% sales staff turnover and 2% lead conversion from third-party platforms.
Lead Generation Platforms Are Failing: Legacy systems flood dealers with unqualified leads, drain resources, and deliver minimal value to consumers.
The Rise of Agentic AI in Auto Retail: Consumers are turning to tools like ChatGPT and CarEdge’s AI agent to navigate purchases with more confidence, speed, and clarity—25% are already doing it.
From Friction to Fluidity: Agentic AI replaces quantity with quality—streamlining the buyer’s journey, reducing information overload, and improving dealer efficiency.
The End of Pricing Games: AI tools now collect and publish out-the-door pricing from thousands of dealerships, exposing hidden fees and rewarding transparent sellers.
The Future of Negotiation: AI agents can negotiate on behalf of both buyers and sellers—minimizing stress, cutting transaction times from days to hours, and removing the adversarial edge.
Real-World Impact Stories: One buyer saved $1,280 and hours of back-and-forth using CarEdge’s agentic AI—illustrating AI’s practical value in real-life scenarios.
AI Helps Honest Dealers Win: In a trust-starved industry, AI gives reputable dealers a new way to stand out by offering full transparency and faster deals.
What’s Next for AI in Auto Retail: The emerging frontier: AI agents dynamically collecting and updating real-time pricing and inventory data across markets to offer true market intelligence.
For the Silo, Zach Shefska. Zach is CEO of CarEdge, a leading platform—founded by father-and-son team Ray and Zach Shefska—dedicated to empowering car shoppers with free expert advice, in-depth market insights and tools to navigate every step of the car-buying journey. From researching vehicles to negotiating deals, CarEdge helps consumers save money, time and hassle. Along with trusted resources like the CarEdge Research Center, Vehicle Rankings and Reviews, and hundreds of guides on YouTube, CarEdge is redefining transparency and fairness in the automotive industry. Connect with Shefska at www.CarEdge.com or on social media on YouTube, TikTok, X, Facebook, and Instagram.
Buckminster Fuller was a genius, and his geodesic dome buildings were not only revolutionary in their construction but also incredibly unique and memorable. Perhaps your grandparents attended Expo67 in Montreal (you guessed it, waaay back in 1967) and visited the United States Pavilion. Read this snippet for a time-capsule account:
“The United States exhibit, entitled Creative America, is designed to illustrate technological and esthetic inventiveness in the U.S.A. A huge transparent geodesic “bubble” contains a multi-level system of exhibit platforms interconnected by escalators and walkways. The platforms support a variety of exhibit components specially selected or designed for the new environment created by the structure. Situated on Ile Sainte-Hélène close to the Métro station from which there is Minirail connection with the Expo-Express, the bubble is 20 stories high and has a spherical diameter of 250 feet. By day, the bubble glistens as the sun highlights the structure and, by night, the bubble “glows” from interior lighting. The interior exhibits reflect different aspects of the United States and include folk art, cinema and fine arts displays, as well as a space exhibit which is reached by a 125 foot escalator and a simulated lunar landscape supporting full scale lunar vehicles. A 300-seat theatre features a 3-screen color film showing the games children play.”
Photo- National Archives of Canada
If you think that was pretty amazing check out some of Buckminster’s buildings that unfortunately didn’t make it past the planning stage.
VENTURI SPACE PRESENTS MONA LUNA, THE EUROPEAN LUNAR ROVER
MONA LUNA, designed by Sacha Lakic
Paris Air Show, Le Bourget, June 2025 – Venturi Space unveils MONA LUNA, its 100% European-built lunar rover. Designed to support the ambitions of the European Space Agency and the French CNES, the vehicle will be built at Venturi Space France’s facility in Toulouse. The ultimate aim is to provide Europe with a lunar-capable rover by 2030.
European autonomy in lunar mobility is a major strategic challenge. Venturi Space is helping to make that a reality with MONA LUNA, its upcoming lunar rover designed to meet the needs of ESA and national European space agencies. The vehicle will further Europe’s efforts to achieve technological independence in the field of lunar mobility, enabling it to get ahead of the industrial curve and achieve its space ambitions.
A project led by Venturi Space France
Venturi Space France will oversee MONA LUNA’s development and space qualification from its base in Toulouse, coordinating every aspect of the process: onboard electronics, avionics, space-to-ground links, energy management systems, assembly, final integration, and acceptance testing in readiness for space flight. All with one clear objective: to deploy MONA LUNA at the Moon’s South Pole by 2030.
Backed by the ESA and CNES
The European Space Agency is supporting Venturi Space’s efforts to design and develop the critical technologies required for a large lunar rover, capable of surviving multiple lunar nights. ESA’s support validates Venturi Space’s approach and highlights its expertise. The project will draw on the experience acquired from the programmes to develop the FLIP and FLEX rovers under a strategic partnership with US-based company Venturi Astrolab, Inc. Venturi Space is currently designing and building the hyper-deformable wheels that will be fitted to those vehicles, along with the associated electrical systems (in Switzerland) and high-performance batteries (in Monaco).
Using technology made in Europe
MONA LUNA is designed to be carried into space by the Ariane 6.4 launch system and landed on the Moon’s surface by the European Argonaut lunar lander, while the rover itself will be equipped with a robotic arm to handle scientific instruments and payloads. It will be:
– electrically powered, recharging via solar panels,
– designed to move autonomously,
– equipped with three high-performance batteries,
– capable of carrying a wide range of payloads,
– designed to survive multiple lunar nights,
– capable of a top speed of 20 km/h,
– designed to weigh a total of 750 kg.
The rover could also be used in an emergency to carry an astronaut in difficulty, as envisaged by the ESA and CNES in their feasibility studies.
A clear commercial purpose
MONA LUNA’s maiden mission will focus on purely scientific applications, but future deployments could be organized to meet demand from the European private sector for a variety of purposes, including carrying payloads to the South Pole, exploiting lunar resources (such as helium-3) in situ, or even public outreach campaigns. This approach will help establish a sustainable long-term economic model for the rover, in much the same way as the early development of terrestrial mobility.
Gildo Pastor, President of Venturi Space: “I’m still an explorer, first and foremost. Space is a new frontier, and MONA LUNA is how we are actually going to broach it. Alongside Europe, we aim to build an autonomous lunar exploration capability to meet the scientific, economic, and strategic challenges of tomorrow.”
Dr. Antonio Delfino, Director of Space Affairs at Venturi Space: “Our primary focus is to make ourselves fully available to the ESA and European national space agencies. With MONA LUNA, we aim to deliver major technological breakthroughs that will pave the way for extended lunar mobility.”
Summer, and thus driving season, is currently in full swing for much of Canada. Most of us who have them are trying to drive our classics every chance we get. Here are some vital reminders to heed if your vintage ride gets called up into everyday action.
Where I live is currently in the beautiful pocket of time where the mornings are cool yet bright and the sun only really gets hot in the middle of the afternoon. All of my cars love this weather, and I love driving just that little bit more. So I’m trying to drive as much as I can, and if you are doing the same, keep the following in mind whenever your vintage ride gets pressed into more routine service.
Before we dive in though, it’s worth mentioning that old cars were once new cars. Someone drove and treated my Chevrolet Corvair the way I currently behave while behind the wheel of my wife’s Jeep Renegade—a daily driver. Traffic 30, 40, or even 90 years ago was radically different than traffic today, and many of our common-sense habits have shifted meaning to the point that what makes total sense for you in your old car will look insane to other road users. While old cars require additional care and attention to be used regularly, driving your car is the best thing you can do for it. Don’t be scared of using the car exactly how it was intended.
Old cars have old brakes
Fresh wheel bearings and drums made for a big improvement in drivability and safety on my Model A Ford. Photo- Kyle Smith
It’s easy to get lured into driving like those around you, but be careful. Without notice, you’ll find yourself tailgating at the same distance as the modern cars, and when that line of cars taps the brakes, suddenly the concept of 5-mph bumpers doesn’t seem so comical.
Vintage brakes can be made to work very well with a bit of care and attention, but even I have to admit vintage designs and materials just cannot compare to modern brakes—that is before even mentioning driver assist systems like anti-lock braking or emergency braking. Give yourself plenty of room.
Check your fluids often
Kyle Smith
Modern cars have spoiled us with the ability to drive thousands of miles without opening the hood. Regardless of how you feel about the separation between driver and mechanic over time, driving your vintage car on more than just a couple weekends a month requires staying on top of topping off fluids.
Old engines can and often do consume oil at a rate much higher than modern engines. Add in even just a small leak and suddenly the bottom of the dipstick is bone dry and before long, so is the oil pickup. Engine oil also helps cool an engine, so keeping oil topped up helps for multiple reasons beyond just proper lubrication. Also keep an eye on brake fluid and coolant.
Modern car gauges are “normalized,” meaning they often sit basically stationary while driving despite slight fluctuations in the pressures, temperatures, and levels they monitor. On an older car, a coolant temp gauge might rise slightly when caught at a long stoplight, but that isn’t necessarily a cause for concern. Most automotive engines operate best when coolant temps are between 180 and 210 degrees Fahrenheit. A modern gauge will sit still across that entire range, but an old-school mechanical gauge will show everything. This means coolant temp could drop slightly when you turn on the heater, or increase some with long periods of idling or while an air conditioner is cycling.
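If it helps to picture the difference, think of a gauge as a function from sensor reading to needle position. Here is a minimal sketch in Python; the function names are made up for illustration, and the 180–210°F band is simply the typical operating range mentioned above, not a value from any real instrument cluster:

```python
# Illustrative sketch: how a "normalized" modern gauge hides variation
# that an old-school mechanical gauge would show directly.
# The 180-210 F band is the typical operating range discussed above;
# the function names and structure here are hypothetical.

def mechanical_gauge(temp_f):
    """A mechanical gauge reads the sender directly: the needle tracks 1:1."""
    return temp_f

def normalized_gauge(temp_f, band=(180, 210)):
    """A normalized gauge parks the needle mid-dial anywhere inside the
    normal operating band and only moves once the reading leaves it."""
    low, high = band
    if low <= temp_f <= high:
        return (low + high) / 2  # needle pinned at "normal"
    return temp_f                # only out-of-band readings show through

for t in (175, 185, 205, 215):
    print(t, mechanical_gauge(t), normalized_gauge(t))
```

The effect is that two quite different coolant temperatures (say 185°F and 205°F) produce the identical needle position on a modern dash, while the mechanical gauge would let you watch the swing.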
Make your escape plans
Smiley N. Pool/Houston Chronicle/Getty Images
Even in great condition, aging cars can and do break down. Think through what common failures might occur with your car and formulate a plan for how you will handle the situation. This can mean packing a tool kit, re-upping your roadside assistance membership, or choosing routes and times of day that will help ensure you have a smooth trip. Some vintage cars will have zero trouble in modern traffic, but if yours tends to overheat or get cranky sitting still, make sure that you scout an escape route, should you get snarled in traffic. Being stuck on the side of the road is infinitely better than being stuck in the middle of the road. Trust me. There are a few roads around town that I avoid in my vintage cars due to the lack of shoulder or safe place to veer off. Paranoid? Maybe a little, but I don’t want to get hit while sitting on the side of the road.
Be aware of your tires
Andrew Ganz
Modern tires are downright amazing and often go underappreciated. Since vintage cars see less mileage than their modern counterparts, a lot more people are willing to drive on older or poor-condition tires, sometimes out of pure ignorance or simple lack of inspection. Tread depth and age are big considerations, but if you’re running modern reproductions of older tire designs, there is also the way those tires handle water. Siping and water control have a huge impact on handling and braking. You might have brand-new tires, but if the design is 50 years old, they are going to handle like it. Not necessarily a bad thing, but something to adjust to. For the Silo, Kyle Smith.
Why are so many people still struggling with brain fog, chronic fatigue, low energy, impaired memory, diminished focus, high stress and ADHD symptoms—even, for many, after years of trying treatments? Neurotologist Dr. Kendal Stewart believes it’s because we’re too often treating symptoms, not causes. He’s spent the last 25 years addressing that with science-backed ways to help people at every age improve how they feel and function, both immediately and long-term. As an authority in everyday brain health, Dr. Kendal Stewart helps individuals optimize focus, memory, resilience and other brain health concerns by transforming complex neurological science into simple, actionable lifestyle-based strategies.
Dr. Stewart has spoken at length and has written many editorials discussing real-world habits, tactics and solutions to reduce brain fog, feel more energized, support focus, maintain emotional balance, and preserve cognitive health as one ages. Related topics include:
What brain fog, anxiety, and immune dysfunction have in common—and how to address all three holistically
Fueling your brain and immune system based on your unique DNA
Actionable daily habits to support brain and immune system health
How your genetics dictate your brain & immune health–and how to decode it
Why the future of medicine is personalized and already here
What are neuroimmune disorders, and why are we seeing a rise in conditions like chronic pain?
How genetic testing removes guesswork in treating complex neurological conditions
Hope for the undiagnosed: Dr. Stewart’s approach to finding the ‘source’ when other treatments fail
Why we’re still getting brain fog wrong—and what to do instead
A neurotologist’s take on impaired memory, focus, stress and fatigue: stop treating just the symptoms
Easy ways to support your brain & immune system every day
Does your DNA hold the key to focus, energy & emotional balance?
Genetics meets neuroscience for personalized brain health
What distinguishes Dr. Stewart?
Medical Maverick: One of the few specialists bridging neurotology (brain-ear balance) and neuroimmune genetics to treat complex disorders.
DNA-Driven Results: Nearly every patient receives genetic testing to eliminate guesswork—a game-changer for conditions like autism, chronic fatigue, and concussions.
Science Meets Storytelling: From IV therapies to nutrigenomics, he translates cutting-edge science into actionable steps for families and high performers.
Media-Ready: A charismatic speaker and podcast personality with patented tech, a supplement line (Neurobiologix), and a mission to “recover” patients, not just manage symptoms.
About the Expert Dr. Stewart is a board-certified neurotologist and nationally recognized expert in neuroimmune disorders—including genetic abnormalities, chronic pain, ADD/ADHD and autism spectrum conditions. With advanced training in both surgery and cellular science, he’s made it his life’s work to uncover root causes and tailor individualized solutions through genetic testing, functional medicine, and integrative care. His approach emphasizes prevention as much as treatment, using lifestyle, nutrition, and nervous system support as daily fuel for better brain function. Through his work, he provides practical tools to regulate stress, stabilize energy and boost mental clarity.
He’s also a sought-after keynote speaker, inventor, and founder of multiple healthcare innovations, including GX Sciences, SensoryView, and Neurobiologix—a company dedicated to improving individual well-being by developing cutting-edge nutritional supplements rooted in the science of nutrigenomics. Dr. Stewart brings not only clinical authority but also an empowering, real-world lens—helping families, patients, and professionals better understand and improve nervous system and immune function. For the Silo, Karen Hayhurst.
Just over two decades ago, in a small theater in Yelm, Washington, a little film called What The Bleep Do We Know?!? screened to its first audiences and the term “I Create My Reality” was thrust into the collective consciousness. One of the themes explored is the idea that individuals have the power to create their own reality through their thoughts and intentions. This concept is illustrated through Amanda’s experiences and supported by discussions on the nature of consciousness and its potential influence on the physical world.
Since then countless films and books have extolled the wonders of quantum physics and how understanding the nature of reality could change your life, often in just 3 easy steps. I too thought it was easy, heck I made a movie about it! And for a while it was easy, until I realized that I had only scratched the surface of what “it” all means.
For sure, at a party I could rattle off the wonders of quantum this and quantum that, I could throw around words like entanglement and heady concepts like the Copenhagen interpretation, I could wow you with the double-slit experiment like nobody’s business. But the truth was, it was really all just smoke and mirrors.
What did understanding quantum physics have to do with my happiness?
What did understanding the workings of the brain mean to my life, in reality, at least this reality, the one where I have kids and bills to pay? I mean, it’s fun to dream about other dimensions and my life as an electron popping in and out, but in the end I felt as though it was becoming mental masturbation, an easy way to escape: even though I knew I wasn’t really touching that chair, it was possible I wasn’t even real.
Betsy was one of the three filmmakers (along with William Arntz and Mark Vicente) of What the Bleep Do We Know!?
What I was truly seeking was not the facts about how that chair manifested itself into my reality, but how I could be happy whether I had that chair or not.
Happiness has nothing to do with quarks and the discovery of the Higgs Boson was not going to bring me ever-lasting peace and joy. That I was going to have to find all on my own.
I began to explore the sacred cows, not only in my life, my beliefs about who I was and what I wanted, but also the sacred cows of spirituality, new thought and, yes, quantum physics, and how I could take all this knowledge and use it to create the happiness I sought, because after all, that is what we are all after. It is why we ask “why?”. It is why we explore the deepest depths of the quantum foam. And so far quantum physics hasn’t found the happiness particle, because it doesn’t exist within the particles out there; it exists within the immeasurable particles within me.
Sometimes great things can come from unexpected places. When our friends at kommandostore.com were hit up by an Italian scuba diving company for CBRN-Rated Gas Masks a few years back, they were very intrigued. Mestel Safety, under ‘Ocean Reef Group’, makes the “SGE 400-3” — a gas mask that thinks completely outside the box — a favorite all-rounder on the gas mask market.
SEE THINGS CLEARER.
As usual, kommandostore will be offering the full suite of masks (a CBRN-approved and a non-CBRN-approved version*), filters, and eyeglass inserts. *More on that below.
A look under the sea – how military scuba diving had an important impact on the design of this unorthodox gas mask…
UNDER-WATER ORIGINS
Ocean Reef Group, Mestel Safety’s parent company, actually specializes in all kinds of equipment for undersea exploration. And it all started with rubber — Giorgio, Ruggero, and Gianni Gamberini worked at a tire repair shop in Genoa, Italy. During their experimentation with rubber compounds at the time, they were approached by a pioneer of scuba diving and legend of the Italian Navy, Luigi Ferraro. He wanted to make rubber masks and fins for scuba diving based on his experience. From the successful designs that resulted, a sprawling Italian scuba industry was born.
Commander Luigi Ferraro pictured in his diving gear. He was part of the “Gamma” sapper group, who performed some of the first major underwater stealth operations in WWII with the aid of very-early SCBA equipment. He would go on to sink 3 enemy ships by himself during a long sabotage operation, becoming one of the few people to have received Italy’s highest Naval honor (the Gold Medal) and live to tell the tale. The gif shows examples of some of the equipment he really used, including a damaged Panerai dive watch, and the aforementioned scuba fins. Quite the backstory.
But like all good materials science, one of its breakthroughs resulted from a mistake. An “incorrect” mix of rubber ended up also being the first buoyant rubber compound, incredibly important in the making of flippers. The Gamberini brothers would also pioneer some of the first rubber watch straps, which were a massive upgrade in comfort & security in comparison to leather straps that would degrade in the salty depths.
This is about as good as scuba gear got in the 50s and 60s. On this gentleman’s left hand, you can see his dive watch with a stainless steel wrist strap. While still incredibly popular today even amongst avid scuba divers, they weren’t ideal for military use due to their reflectivity.
Their company Ocean Reef would go on to pioneer the design of the first ever full-face mask for snorkeling use. It featured an almost entirely transparent facepiece with an incredible field of view, which would “float” in front of the rubber that sealed to your face, reducing felt weight. Sounds like these would be great features on a gas mask, eh? They had the same feeling too…
“Mestel Safety”, their medical & safety division, would use everything they learned with their pedigree in undersea engineering, and the very gas mask we’re presenting today would be born. From the depths of the Mediterranean to a position of respect in military & civil applications, Ocean Reef has come a long way, and they definitely earned their spot amongst the best.
COMBAT CAPABILITY
Don’t be spooked by the unconventional design — these masks are tough as nails. Mestel Safety tested their masks by barraging the facepiece with, quote, “6.35mm steel spheres going over 300 mph”. For some reason the specificity makes it sound hilarious, but that’s practically like being shot directly in the face with a BB gun over and over and shrugging it off – not bad one bit. So, rest assured, this thing can probably handle some projectiles from common workshop incidents and Airsoft matches.
Probably its most visually obvious feature is, once again, the insane Field of View. It preserves nearly 90% of your vision without significant “warping” and makes it pretty usable with firearms, like many mil-contract masks on the market. But when you put on the average military mask, you’ll be stunned at how much you can’t see in comparison.
Having a massive split in the mask reduces the ocular overlap for your eyes and does, in fact, impede your vision right away. It’s why masks like the Avon M50 feature a single unified eyepiece instead of the classic two-piece styled masks of the cold war.
Lastly, these are comfortable to wear over very long durations thanks to the “floating” facepiece design. It allows the rubber to seal perfectly to the shape of your face, and takes the “felt weight” off of your face and onto the harness, where it should be.
We could go on about the cool factor of this mask for a lot longer but if you want to take a closer look at the mask you should investigate the product pages 👇
KNOW THE DIFFERENCE!
An important side note on “CBRN” capability: If you’re looking for the model with 90% of the capability at a reduced price, the silicone-rubber-based model is what you’re going to want to pick up. So what’s that other 10%? We’ll keep it simple: the butylated rubber, or just “butyl rubber,” adds the ‘R’ and ‘N’ protections to CBRN (Chemical, Biological, Radiological, Nuclear).* *If you’re actually planning on dealing with those extra threats or the ‘blister agents’ that can also bypass a silicone seal, you’re going to need way, way more than just the mask to protect yourself anyways. Think a full HAZMAT suit with chemical tape, gloves, booties. And that’d only be for an hour or two of exposure to some of these more deadly agents. In addition to having the right equipment, the best plan is to simply GTFO.
The TL;DR is that this mask will cover you (literally) in most other incidents where a civilian might want full-face protection, from civil unrest to forest-fire evacuation, and of course common household projects. It’s simple: pick up the ‘BB’ model if you are interested in having the full ‘CBRN’ capability at the cost of slightly reduced comfort. And remember: A gas mask is only as good as the filter you’re breathing through, and we have a plethora of information about the excellent filters we’re also stocking from Mestel.
Another cool feature: there are three different positions where the filter can be mounted, to your heart’s desire.
One other note: the lack of ‘NIOSH’ approval for these masks is a bit misleading. Since these are European-made masks, they fall under ‘CE’ standards, which work a bit differently than NIOSH approval. An explanation of these standards can be found on kommandostore’s product page.
Whether this is your first serious-use gas mask with actual pedigree or you’re looking for an affordable alternative to mil-contract-priced (expensive) masks, we’re confident that the SGE 400/3 will be the baby bear’s porridge. Once again, take a look at the product pages — you’ll find everything from sizing info to a free copy of the user’s manual if you’d like to read up.
I started out creating sound experiments while in high school, circa 1980, with circuit-bent hardware and a cheap Casio keyboard.
I then entered the working world and forgot all about making music. Fast forward 30+ years, and the itch to make experimental music overtook me again, but now technology had changed drastically. I no longer needed hardware. I discovered apps on my iPhone, and music platforms like SoundCloud and Bandcamp were all that I needed. I was immediately obsessed.
Within a couple years, I had filled over seven free SoundCloud accounts and two Bandcamp albums, as well as an artist page, with experimental music, and I was having a great time doing it. But I started to grow tired of using the same software.
I yearned to use hardware/instruments again, but not being able to play an instrument is a definite hindrance 🙂 I searched for cheap keyboards on the net. I soon discovered the “Stylophone” and ordered one ‘sight unseen’. It was unique, inexpensive and fun, but quite limited in sound variety. I started mixing the Stylophone with app-produced sounds/music, as well as other “found sounds”. (I really appreciate the functionality of software-based mixing apps, which are almost essential to my creations these days.) I then stumbled upon a couple of user videos of the Hyve synthesizer, and knew I had to have it. It was clearly friendly to non-musicians (and looked so different, cool and fun).
Then came the disappointment …
You can’t buy one! (BUT I HAD TO HAVE ONE!!!) Turns out, the engineer/designer guru behind this awesome device (Skot Wiedmann) held workshops in the Chicago area (hard to believe, but it’s been almost a decade since I made this trip!), where you could go build your own, very inexpensively. I knew what I had to do. I looked at a map, saw that Chicago was about 8 hours away from me here in Ontario, Canada and realized that I had to go build it. I started to plan the trip. I knew that a fellow SoundCloud musician and Facebook friend (Leslie Rollins) lived in Berrien Springs, Michigan, about 2 hours outside of Chicago.
This presented a twofold opportunity. I could hopefully meet Leslie face to face, and have a place to spend the night. I contacted Les and everything was A-OK! I purchased a ticket to build my Hyve, and started to plan my road trip. The workshop was going to be from noon to 3pm, on a Saturday in late September, in a cool space called Lost Arts in Chicago.
I had the whole week off from work, because I was overseeing a contractor doing extensive yard work at my house all week, and I was hoping to leave Friday so as to arrive at Leslie’s place in the late afternoon or early evening, spend the night, and leave for the workshop Saturday morning. Alas, plans rarely work as hoped.
The contractor wasn’t finished until Friday afternoon, and Les wasn’t getting home from a business trip until late Friday night. New plan! Early to bed Friday. Early to rise Saturday (2:30 am), and depart for Leslie’s place in Michigan. It was an easy drive, and I got to Berrien Springs (a beautiful sleepy little university village) around 8:30 am. Met Leslie, and got to trade stories over a great breakfast in a local cafe. Then, I quickly admired Leslie’s impressive modular synth racks at his home studio “Convolution Atelier” and then left for “Lost Arts” in Chicago.
Lost Arts is located in a cool old industrial complex.
The workshop provided everyone with a surface mount board with the touchpad on one side and the component layout on the back. A sheet listing components and placement was also handed out, along with tiny plastic tweezers. Everyone then had their component side “pasted” with a solder paste applied through a pierced template, in a process similar to silk screening. Everyone then started to receive their very tiny components from the parts list. Following the placement locations, the components (chips, capacitors, resistors, etc.) were set into their pasted areas with the tweezers (magnification and extra lighting were a must). Once all the components were placed, they were carefully “soldered” into place by simply holding a heat gun over each component until the solder on the board had adhered it. Once this was done, everyone had their 9v battery and line-out jacks hand soldered into place by Skot, and then … the moment of truth: Skot tested each one for proper operation.
It was a fascinating process and great experience.
I met a lot of cool people at the workshop, both builders and staff/helpers! I can’t say enough what a fantastic experience this was, and what an awesome, diverse and versatile device the Hyve is. I doubted my sanity when planning this trip, but it turned out to be very rewarding!
Leslie and I then went back to Michigan, stopped at a local brewery in Berrien Springs (Cultivate) and sampled a few of their excellent brews, and then proceeded to Convolution Atelier to play with Leslie’s modular system. (I’m a newbie to all things modular, and I received a great crash course from Leslie on his very cool array!) Then it was out to dinner with Leslie and his wonderful wife Lisa, and finally back to their house where I stayed for the night, and finally hit the road towards home the next morning. It truly was a great adventure! For the Silo, Mike Fuchs.
Gravitational action at a distance is non-Newtonian and independent of mass, but is proportional to intrinsic energy, distance, and time. Electrical action at a distance is proportional to intrinsic energy, distance, and time.
The conventional assumption that all energy is kinetic and proportional to velocity and mass has resulted in an absence of mechanisms to explain important phenomena such as stellar rotation curves, mass increase with increase in velocity, constant photon velocity, and the levitation and suspension of superconducting disks.
In addition, there is no explanation for the existence of the fine structure constant, no explanation for the value of the proton-electron mass ratio, no method to derive the spectral series of atoms larger than hydrogen, and no definitive proof or disproof of cosmic inflation.
All of the above issues are resolved by the existence of intrinsic energy.
Table of contents
Part One “Gravitation and the fine structure constant” derives the fine structure constant, the proton-electron mass ratio, and the mechanisms of non-Newtonian gravitation, including the precession rate of Mercury’s perihelion and stellar rotation curves.
Part Two “Structure and chirality” describes the structure of particles and the chirality meshing interactions that mediate action at a distance between particles and gravitons (gravitation) and particles and quantons (electromagnetism) and describes the properties of photons (with the mechanism of diffraction and constant photon velocity).
Part Three “Nuclear magnetic resonance” is a general derivation of the gyromagnetic ratios and nuclear magnetic moments of isotopes.
Part Four “Particle acceleration” derives the mechanism for the increase in mass (and mass-energy) in particle acceleration.
Part Five “Atomic Spectra” reformulates the Rydberg equations for the spectral series of hydrogen, derives the spectral series of helium, lithium, beryllium, and boron, and explains the process to build a table of the spectral series for any elemental atom.
Part Six “Cosmology” disproves cosmic inflation.
Part Seven “Magnetic levitation and suspension” quantitatively explains the levitation of pyrolytic carbon, and the levitation, suspension and pinning of superconducting disks.
Part One
Gravitation and the fine structure constant
“That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has … any competent faculty of thinking can ever fall into it.”1
Intrinsic energy is independent of mass and velocity. Intrinsic energy is the inherent energy of particles such as the proton and electron. Neutrons are composite particles composed of protons, electrons, and binding energy. Atoms, composed of protons, neutrons, and electrons, are the substance of larger three-dimensional physical entities, from molecules to galaxies.
Gravitation, electromagnetism, and other action at a distance phenomena are mediated by gravitons, quantons and neutrinos. Gravitons, quantons and neutrinos are quanta that have a discrete amount of intrinsic energy and are emitted by particles in one direction at a time and absorbed by particles from one direction at a time. Emission-absorption events can be chirality meshing interactions that produce accelerations or achiral interactions that do not produce accelerations. Chirality meshing absorption of gravitons produces attractive accelerations, chirality meshing absorption of quantons produces either attractive or repulsive accelerations, and achiral absorption of neutrinos does not produce accelerations. The word neutrino is burdened with non-physical associations, thus achiral quanta are henceforth called neutral flux.
A single chirality meshing interaction produces a deflection (a change in position), but a series of chirality meshing interactions produces acceleration (serial deflections). A single deflection in the direction of existing motion produces a small finite positive acceleration (and inertia) and a single deflection in the direction opposite existing motion produces a small finite negative acceleration (and inertia).
There are two fundamental differences between the mechanisms of Newtonian gravitation and discrete gravitation. The first is the Newtonian probability two particles will gravitationally interact is 100% but the discrete probability two particles will gravitationally interact is significantly less. The second difference is the treatment of force. In Newtonian physics a gravitational force between objects always exists, the force is infinitesimal and continuous, and the strength of the force is inversely proportional to the square of the separation distance. In discrete physics the existence of a gravitational force is dependent on the orientations of the particles of which objects are composed, the force is discrete and discontinuous, and the number of interactions is inversely proportional to the square of the separation distance. While there are considerable differences in mechanisms, in many phenomena the solutions of Newtonian and discrete gravitational equations are nearly identical.
There are similar fundamental differences between mechanisms of electromagnetic phenomena and in many cases the solutions of infinitesimal and discrete equations are nearly identical.
A particle emits gravitons and quantons at a rate proportional to particle intrinsic energy. A particle absorbs gravitons and quantons, subject to availability, at a maximum rate proportional to particle intrinsic energy. Each graviton or quanton emission event reduces the intrinsic energy of the particle and each graviton or quanton absorption event increases the intrinsic energy of the particle. Because graviton and quanton emission events continually occur but graviton and quanton absorption events are dependent on availability, these mechanisms collectively reduce the intrinsic energy of particles.
Only particles in nuclear reactions or undergoing radioactive disintegration emit neutral flux, but in the solar system all particles absorb all available neutral flux.
In the solar system, discrete gravitational interactions mediate orbital phenomena and, for objects in a stable orbit, the intrinsic energy loss due to the emission-absorption of gravitons is balanced by the absorption of intrinsic energy in the form of solar neutral flux.
Within the solar system, particle absorption of solar neutral flux (passing through a unit area of a spherical shell centered on the sun) adds intrinsic energy at a rate proportional to the inverse square of orbital distance, and over a relatively short period of time, the graviton, quanton, and neutral flux emission-absorption processes achieve Stable Balance resulting in constant intrinsic energy for particles of the same type at the same orbital distance, with particle intrinsic energies higher the closer to the sun and lower the further from the sun.
The process of Stable Balance is bidirectional.
If a high energy body consisting of high energy particles is captured by the solar gravitational field and enters into solar orbit at the orbital distance of earth, the higher particle intrinsic energies will result in an excess of intrinsic energy emissions compared to intrinsic energy absorptions at that orbital distance, and the intrinsic energy of the body will be reduced to bring it into Stable Balance.
If, on the other hand, a low energy body consisting of low energy particles is captured by the solar gravitational field and enters into solar orbit at the orbital distance of earth, the lower particle intrinsic energies will result in an excess of intrinsic energy absorptions at that orbital distance compared to the intrinsic energy emissions, and the intrinsic energy of the body will be increased to bring it into Stable Balance.
In an ideal two-body earth-sun system, a spherical and randomly symmetrical earth is in Stable Balance orbit about a spherical and randomly symmetrical sun. A randomly symmetrical body is composed of particles that collectively emit an equal intensity of gravitons (graviton flux) through a unit area on a spherical shell centered on the emitting body.
Unless otherwise stipulated, in this document references to the earth or sun assume they are part of an ideal two-body earth-sun system.
The gravitational intrinsic energy of earth is proportional to the gravitational intrinsic energy of the sun because total emissions of solar gravitons are proportional to the number of gravitons passing into or through earth as it continuously moves on a spherical shell centered on the sun (and also proportional to the volume of the spherical earth, to the cross-sectional area of the earth, to the diameter of the earth and to the radius of the earth).
Likewise, because the sun and the earth orbit about their mutual barycenter, the gravitational intrinsic energy of the sun is proportional to the gravitational intrinsic energy of the earth because total emissions of earthly gravitons are proportional to the number of gravitons passing into or through the sun as it continuously moves on a spherical shell centered on the earth (and also proportional to the volume of the spherical sun, to the cross-sectional area of the sun, to the diameter of the sun and to the radius of the sun).
We define the orbital distance of earth equal to 15E10 meters and note earth’s orbit in an ideal two-body system is circular. If additional planets are introduced, earth’s orbit will become elliptical and the radius of earth’s former circular orbit will be equal to the semi-major axis of the elliptical orbit.
We define the intrinsic photon velocity c equal to 3E8 m/s and equal in amplitude to the intrinsic constant Theta which is non-denominated. We further define the elapsed time for a photon to travel 15E10 meters equal to 500 seconds.
The non-denominated intrinsic constant Psi, 1E-7, is equal in amplitude to the intrinsic magnetic constant denominated in units of Henry per meter.
Psi is also equal in amplitude to the 2014 CODATA vacuum magnetic permeability divided by 4 (after 2014 CODATA values for permittivity and permeability are defined and no longer reconciled to the speed of light); half the electromagnetic force (units of Newton) between two straight ideal (constant diameter and homogeneous composition) parallel conductors with center-to-center distance of one meter and each carrying a current of one Ampere; and to the intrinsic voltage of a magnetically induced minimum amplitude current loop (3E8 electrons per second).
The intrinsic electric constant, the inverse of the product of the intrinsic magnetic constant and the square of the intrinsic photon velocity, is equal to the inverse of 9E9 and denominated in units of Farad per meter.
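The arithmetic of the constants defined above can be checked directly. A minimal sketch, using only the document’s own defined values (not CODATA):

```python
Theta = 3e8       # intrinsic photon velocity c, m/s (the document's defined value)
Psi = 1e-7        # intrinsic magnetic constant amplitude, Henry per meter
d_earth = 15e10   # defined orbital distance of earth, m

# A photon crosses the orbital distance of earth in 500 seconds
assert d_earth / Theta == 500.0

# Intrinsic electric constant: inverse of (Psi x Theta^2) = inverse of 9E9, Farad per meter
eps_i = 1 / (Psi * Theta**2)
assert abs(eps_i * 9e9 - 1.0) < 1e-9
```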
The Newtonian mass of earth, denominated in units of kilogram, is equal to 6E24, and equal in amplitude to the active gravitational mass of earth, denominated in units of Einstein (the unit of intrinsic energy).
The active gravitational mass is proportional to the number of gravitons emitted and the Newtonian mass is proportional to the number of gravitons absorbed. Every graviton absorbed contributes to the acceleration and inertia of the absorber, therefore the Newtonian mass is also the inertial mass.
We define the radius of earth, the square root of the ratio of the Newtonian inertial mass of earth divided by orbital distance, or the square root of the ratio of the active gravitational mass of earth divided by its orbital distance, equal to the square root of 4E13, 6.325E6, about 0.993 times the NASA volumetric radius of 6.371E6. Our somewhat smaller earth has a slightly higher density and a local gravitational constant equal to 10 m/s2 at any point on its perfectly spherical surface.
We define the Gravitational constant at the orbital distance of earth, the ratio of the local gravitational constant of earth divided by its orbital distance, equal to the inverse of 15E9.
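These defined earth values are mutually consistent, which a short check confirms (again using the document’s own numbers):

```python
M_earth = 6e24    # defined Newtonian mass of earth, kg
d_earth = 15e10   # defined orbital distance of earth, m

r_earth = (M_earth / d_earth) ** 0.5    # sqrt(4E13), approximately 6.325E6 m
assert abs(r_earth - 6.3246e6) / r_earth < 1e-4

g_local = 10.0                      # defined local gravitational constant, m/s^2
G_at_earth = g_local / d_earth      # Gravitational constant at earth's orbit = 1/15E9
assert abs(G_at_earth * 15e9 - 1.0) < 1e-9

# Consistency check: G x M / r^2 recovers the local gravitational constant
assert abs(G_at_earth * M_earth / r_earth**2 - 10.0) < 1e-6
```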
The unit kilogram is equal to the mass of 6E26 protons at the orbital distance of earth, and the proton mass equal to the inverse of 6E26.
The proton intrinsic energy at the orbital distance of earth is equal to the ratio of the proton mass divided by the mass-energy factor delta (equal to 100), that is, the inverse of 6E28. Within the solar system, the proton intrinsic energy increases at orbital distances closer to the sun and decreases at orbital distances further from the sun. Changes in proton intrinsic energy are proportional to the inverse square of orbital distance.
The Newtonian mass of the sun, denominated in units of kilogram, is equal to 2E30, and equal in amplitude to the active gravitational mass of the sun, denominated in units of Einstein.
The active gravitational mass of earth divided by the active gravitational mass of the sun is equal to the intrinsic constant Beta-square and its square root is equal to the intrinsic constant Beta.
The charge intrinsic energy ei, denominated in units of intrinsic Volt, is proportional to the number of quantons emitted by an electron or proton. The charge intrinsic energy is equal to Beta divided by Theta-square, the inverse of the square root of 27E38.
Intrinsic voltage does not dissipate kinetic energy.
The electron intrinsic energy Ee, equal to the ratio of Beta-square divided by Theta-cube, the ratio of Psi-square divided by Theta-square, the product of the square of the charge intrinsic energy and Theta, and the ratio of the intrinsic electron magnetic flux quantum divided by the intrinsic Josephson constant, is denominated in units of Einstein.
The intrinsic electron magnetic flux quantum, equal to the square root of the electron intrinsic energy, is denominated in units of intrinsic Volt second.
The intrinsic Josephson constant, equal to the inverse of the square root of the electron intrinsic energy, the ratio of Theta divided by Psi and the ratio of the photon velocity divided by the intrinsic sustaining voltage of a minimum amplitude superconducting current, is denominated in units of Hertz per intrinsic Volt.
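The several equivalent expressions given above for the charge intrinsic energy, the electron intrinsic energy, the flux quantum, and the Josephson constant can be verified numerically. A sketch, using the document’s defined values:

```python
Theta, Psi = 3e8, 1e-7
Beta_sq = 6e24 / 2e30              # earth/sun active gravitational mass ratio = 3E-6
Beta = Beta_sq ** 0.5

e_i = Beta / Theta**2              # charge intrinsic energy, intrinsic Volt
assert abs(e_i - 1 / (27e38) ** 0.5) / e_i < 1e-9   # inverse square root of 27E38

# Electron intrinsic energy: the text's expressions all give 1/9E30 Einstein
Ee = Beta_sq / Theta**3
assert abs(Ee - Psi**2 / Theta**2) / Ee < 1e-9
assert abs(Ee - e_i**2 * Theta) / Ee < 1e-9
assert abs(Ee * 9e30 - 1.0) < 1e-9

flux_q = Ee ** 0.5                 # intrinsic electron magnetic flux quantum
K_J = 1 / flux_q                   # intrinsic Josephson constant = Theta/Psi = 3E15
assert abs(K_J - Theta / Psi) / K_J < 1e-9
```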
The discrete (dissipative kinetic) electron magnetic flux quantum, equal to the product of 2π and the intrinsic electron magnetic flux quantum, is denominated in units of discrete Volt second, and the discrete rotational Josephson constant, equal to the intrinsic Josephson constant divided by 2π and the inverse of the discrete electron magnetic flux quantum, is denominated in units of Hertz per discrete Volt. These constants are expressions of rotational frequencies.
We define the electron amplitude equal to 1. The proton amplitude is equal to the ratio of the proton intrinsic energy divided by the electron intrinsic energy.
We define the Coulomb, ec, equal to the product of the charge intrinsic energy and the square root of the proton amplitude divided by two. The Coulomb denominates dissipative current.
We define the Faraday equal to 1E5, and the Avogadro constant equal to the Faraday divided by the Coulomb.
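The Coulomb and Avogadro definitions above can be checked arithmetically. This sketch assumes a proton amplitude of 150, the value implied later in Part One (its 3/2 power is approximately the 1837 electron-proton deflection ratio); that number is an inference, not stated on this line:

```python
e_i = (3e-6) ** 0.5 / (3e8) ** 2    # charge intrinsic energy, from the earlier definitions
A_p = 150                           # proton amplitude (assumed; 150**1.5 is about 1837)

e_c = e_i * (A_p / 2) ** 0.5        # the document's Coulomb: 5/3 x 1E-19
assert abs(e_c - (5 / 3) * 1e-19) / e_c < 1e-9

Faraday = 1e5
N_A = Faraday / e_c                 # Avogadro constant = 6E23
assert abs(N_A / 6e23 - 1.0) < 1e-9
```

Note that 5/3 × 1E-19 lands close to the CODATA elementary charge of about 1.602E-19 Coulomb.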
Lambda-bar, the quantum of particle intrinsic energy, equal to the intrinsic energy content of a graviton or quanton, is the ratio of the product of Psi and Beta divided by Theta-cube, the ratio of Psi-cube divided by the product of Beta and Theta-square, the product of the charge intrinsic energy and the intrinsic electron magnetic flux quantum, and the charge intrinsic energy divided by the intrinsic Josephson constant.
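The four expressions for Lambda-bar stated above are algebraically equivalent, which this sketch confirms with the document’s defined values:

```python
Theta, Psi = 3e8, 1e-7
Beta = (6e24 / 2e30) ** 0.5          # sqrt of the earth/sun active mass ratio
e_i = Beta / Theta**2                # charge intrinsic energy
flux_q = (Beta**2 / Theta**3) ** 0.5 # intrinsic electron magnetic flux quantum
K_J = 1 / flux_q                     # intrinsic Josephson constant

lam = Psi * Beta / Theta**3          # Lambda-bar, the quantum of intrinsic energy
assert abs(lam - Psi**3 / (Beta * Theta**2)) / lam < 1e-9
assert abs(lam - e_i * flux_q) / lam < 1e-9
assert abs(lam - e_i / K_J) / lam < 1e-9
# numerically sqrt(3)/27 x 1E-34, approximately 6.415E-36
assert abs(lam - (3**0.5 / 27) * 1e-34) / lam < 1e-9
```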
CODATA physical constants that are defined as exact have an uncertainty of 10 to 12 decimal places; therefore the exactness of Newtonian infinitesimal calculations is of a similar order of magnitude. We assert that Lambda-bar and proportional physical constants are discretely exact (equivalent to Newtonian infinitesimal calculations) because discretely exact physical properties can be exactly expressed to greater accuracy than can be measured in the laboratory.
All intrinsic physical constants and intrinsic properties are discretely rational. The ratio of two positive integers is a discretely rational number.
The ratio of two discretely rational numbers is discretely rational.
The rational power or rational root of a discretely rational number is discretely rational.
The difference or sum of discretely rational numbers is discretely rational. This property is important in the derivation of atomic spectra where it serves the same purpose as a Fourier transform in infinitesimal mathematics.
The intrinsic electron gyromagnetic ratio, equal to the ratio of the cube of the charge intrinsic energy divided by Lambda-bar square, is denominated in units of Hertz per Tesla.
The intrinsic proton gyromagnetic ratio, equal to the ratio of the intrinsic electron gyromagnetic ratio divided by the square root of the cube of half the proton amplitude, and to the ratio of eight times the photon velocity divided by nine, is denominated in units of Hertz per Tesla.
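The two gyromagnetic definitions can be checked numerically. This sketch again assumes a proton amplitude of 150 (an inference from the deflection ratio, not stated here):

```python
Theta = 3e8                           # intrinsic photon velocity
e_i = (3e-6)**0.5 / Theta**2          # charge intrinsic energy
lam = 1e-7 * (3e-6)**0.5 / Theta**3   # Lambda-bar

gamma_e = e_i**3 / lam**2             # intrinsic electron gyromagnetic ratio, Hz/T
assert abs(gamma_e - 3**0.5 * 1e11) / gamma_e < 1e-9   # sqrt(3) x 1E11

A_p = 150                             # proton amplitude (assumed)
gamma_p = gamma_e / ((A_p / 2) ** 3) ** 0.5
assert abs(gamma_p - 8 * Theta / 9) / gamma_p < 1e-9   # equals 8c/9
```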
The intrinsic conductance quantum, equal to the product of the intrinsic Josephson constant and the discrete Coulomb, is denominated in units of intrinsic Siemen.
The kinetic conductance quantum, equal to the intrinsic conductance quantum divided by 2π, is denominated in units of kinetic Siemen.
The CODATA conductance quantum is equal to 7.748091E-5.
The intrinsic resistance quantum, equal to the inverse of the intrinsic conductance quantum, is denominated in units of Ohm.
The kinetic resistance quantum, equal to the inverse of the kinetic conductance quantum, is denominated in units of Ohm.
The CODATA resistance quantum is equal to 1.290640E4.
The intrinsic von Klitzing constant, equal to the ratio of the discrete Planck constant divided by the square of the intrinsic electric constant, is denominated in units of Ohm.
The kinetic von Klitzing constant, equal to the ratio of the discrete Planck constant divided by the square of the discrete Coulomb, is denominated in units of Ohm.
The CODATA von Klitzing constant is equal to 2.581280745E4.
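The intrinsic and kinetic quanta above follow from the constants already defined; note they sit near, but not identical to, the CODATA values the text quotes for comparison. A sketch using the document’s Josephson constant and Coulomb:

```python
import math

K_J = 3e15                 # intrinsic Josephson constant, Hz per intrinsic Volt
e_c = (5 / 3) * 1e-19      # the document's discrete Coulomb

G_intrinsic = K_J * e_c    # intrinsic conductance quantum = 5E-4 intrinsic Siemen
assert abs(G_intrinsic - 5e-4) < 1e-15

G_kinetic = G_intrinsic / (2 * math.pi)   # kinetic conductance quantum
R_intrinsic = 1 / G_intrinsic             # intrinsic resistance quantum = 2000 Ohm
R_kinetic = 1 / G_kinetic                 # kinetic resistance quantum
```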
In Newtonian physics the probability particles at a distance will interact is 100% but in discrete physics a certain granularity is needed for interactions to occur.
A particle G-axis is a single-ended hollow cylinder. The mechanism of the G-axis is analogous to a piston which moves up and down at a frequency proportional to particle intrinsic energy. At the end of the up-stroke a single graviton is emitted and during a down-stroke the absorption window is open until the end of the downstroke or the absorption of a single graviton.
The difference (the intrinsic granularity) between the inside diameter of the hollow cylindrical G-axis and the outside diameter of the graviton allows absorption of incoming gravitons at angles that can deviate from normal (straight down the center) by plus or minus 20 arcseconds.
There are three kinds of intrinsic granularity: the intrinsic granularity in phenomena mediated by the absorption of gravitons and quantons; the intrinsic granularity in phenomena mediated by the emission of gravitons and quantons; and the intrinsic granularity in certain electromagnetic phenomena.
The intrinsic granularity in phenomena mediated by the absorption of gravitons or quantons by particles in tangible objects (with kilogram mass greater than one microgram or 1E20 particles) is discretely infinite; therefore the average value of 20 arcseconds is discretely exact.
The intrinsic granularity in phenomena mediated by the emission of gravitons or quantons by particles is 20 arcseconds because gravitons and quantons emitted in the direction in which the emitting axis is pointing have an intrinsic granularity of not more than plus or minus 10 arcseconds.
The intrinsic granularity of certain electromagnetic phenomena appears in, for example, a Faraday disk generator: the “Lorentz force” that causes the velocity of an electron to be at a right angle to the force also causes an additional directional change of 20 arcseconds in the azimuthal direction.
In the above diagram, the intrinsic granularity of graviton absorption is illustrated on the left.
Above center illustrates the aberration between the visible and the actual positions of the sun with respect to an observer on earth as the sun moves across the sky. Position A is the visible position of the sun, position B is the actual position of the sun, position B will be the visible position of the sun in 500 seconds, and position C will be the actual position of the sun in 500 seconds. The elapsed time between successive positions is proportional to the separation distance, but 20 arcseconds of aberration is independent of separation distance.
Above right illustrates the six directions within a Cartesian space and the six possible forms describing the six possible facing directions in which a vector can point. A vector pointing up the G-axis of particle A in the facing direction of particle B has one and only one of the six possible forms. The probability a gravitational interaction will occur, if the vector is facing in one of the other five facing directions, is zero. Therefore, a gravitational interaction involving a graviton emitted by a specific particle A and absorbed by a specific particle B is possible (not probable) in only one-sixth the total volume of Cartesian space.
We define the intrinsic steric factor equal to 6. The intrinsic steric factor is inversely proportional to the probability a specific gravitational intrinsic energy interaction can occur on a scale where the probability a Newtonian gravitational interaction will occur is 100%.
The intrinsic steric factor points outward from a specific particle located at the origin of a Cartesian space facing outward into the surrounding space. The intrinsic steric factor applies to action at a distance in phenomena mediated by gravitons and quantons.
To convert 20 arcseconds of intrinsic granularity into an inverse possibility, divide the 1,296,000 arcseconds in 360 degrees by the product of 20 arcseconds and the intrinsic steric factor.
A possibility is not the same as a probability. The possibility two particles can gravitationally interact (each with the other) is equal to 1 out of 10,800. The probability two particles will gravitationally interact (each with the other) is dependent on the geometry of the interaction.
Because Newtonian gravitational interactions are proportional to the quantum of kinetic energy, the discrete Planck constant, and discrete gravitational interactions are proportional to the quantum of intrinsic energy, Lambda-bar, the factor 10,800 is a conversion factor.
In a bidirectional gravitational interaction, the ratio of the square of the discrete Planck constant divided by the square of Lambda-bar is equal to 10,800.
In a one-directional gravitational interaction the ratio of the discrete Planck constant divided by Lambda-bar is equal to the square root of 10,800.
The discrete Planck constant is equal to Lambda-bar times the square root of 10,800 and denominated in units of Joule second.
The value of the discrete Planck constant, approximately 1.006 times larger than the 2018 CODATA value, is the correct value for the two-body earth-sun system and proportional to the intrinsic physical constants previously defined.
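The conversion from 20 arcseconds of granularity to the factor 10,800, and from Lambda-bar to the discrete Planck constant, can be verified arithmetically with the values defined above:

```python
lam = (3**0.5 / 27) * 1e-34        # Lambda-bar, from the earlier definitions

conversion = 1296000 / (20 * 6)    # arcseconds in 360 degrees / (20 arcseconds x steric factor 6)
assert conversion == 10800.0

h_discrete = lam * conversion ** 0.5   # one-directional ratio: h / Lambda-bar = sqrt(10800)
assert abs(h_discrete - (20 / 3) * 1e-34) / h_discrete < 1e-9

# about 1.006 times the 2018 CODATA Planck constant, as stated
assert abs(h_discrete / 6.62607015e-34 - 1.006) < 1e-3
```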
The CODATA fine structure constant alpha is equal to the ratio of the square of the CODATA electron charge divided by the product of two times the CODATA Planck constant, the CODATA vacuum permittivity and the CODATA speed of light (2018 CODATA values).
The intrinsic constant Beta is a transformation of the CODATA expression.
By substitution of the charge intrinsic energy for the CODATA electron charge, Lambda-bar for two times the CODATA Planck constant, the intrinsic electric constant for the CODATA vacuum permittivity and the intrinsic photon velocity for the CODATA speed of light, the dimensionless CODATA fine structure constant alpha is transformed into the dimensionless intrinsic constant Beta.
The existence of the fine structure constant and its ubiquitous appearance in seemingly unrelated equations is due to the assumption that phenomena are governed by kinetic energy, consequently measured values of phenomena governed or partly governed by intrinsic energy do not agree with the theoretical expectations.
A gravitational phenomenon governed by intrinsic energy is the solar system Kepler constant, equal to the cube of the planet’s orbital distance divided by the square of its orbital period; to the product of the active gravitational mass of the sun and the Gravitational constant at the orbital distance of earth divided by 4π-square; and to the product of the square of the planet’s orbital velocity and its orbital distance divided by 4π-square.
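Reading the Kepler constant as d³/T² = GM/4π² = v²d/4π², the three forms agree with the document’s values for the sun and earth (the period T is derived here from those values and comes out near one year):

```python
import math

G = 1 / 15e9        # Gravitational constant at the orbital distance of earth
M_sun = 2e30        # active gravitational mass (and kilogram mass) of the sun
d = 15e10           # orbital distance of earth, m

# Keplerian period implied by these values
T = 2 * math.pi * (d**3 / (G * M_sun)) ** 0.5
v = 2 * math.pi * d / T

kepler_a = d**3 / T**2
kepler_b = G * M_sun / (4 * math.pi**2)
kepler_c = v**2 * d / (4 * math.pi**2)
assert abs(kepler_a - kepler_b) / kepler_b < 1e-9
assert abs(kepler_c - kepler_b) / kepler_b < 1e-9
```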
The intrinsic constant Beta-square, previously shown to be the ratio of the active gravitational mass of earth divided by the active gravitational mass of the sun, is also proportional to the key orbital properties of the sun, earth, and moon.
An electromagnetic phenomenon governed by intrinsic energy is the proton-electron mass ratio, here termed the electron-proton deflection ratio, equal to the square root of the cube of the proton intrinsic energy divided by the cube of the electron intrinsic energy, and to the square root of the cube of the proton amplitude divided by the cube of the unit electron amplitude.
The CODATA proton-electron mass ratio is a measure of electron deflection (1836.15267344) in units of proton deflection (equal to 1). Because the directions of proton and electron deflections are opposite, the electron-proton deflection ratio is approximately equal to the CODATA proton-electron mass ratio plus one.
In this document, unless otherwise specified (as in CODATA constants denominated in units of Joule proportional to the CODATA Planck constant), units of Joule are proportional to the discrete Planck constant.
The ratio of the discrete Planck constant divided by Lambda-bar, equal to the product of the mass-energy factor delta and omega-2, is denominated in units of discrete Joule per Einstein.
In the above equation the denomination discrete Joule represents energy proportional to the discrete Planck constant and the denomination Einstein represents energy proportional to Lambda-bar. The mass-energy factor delta converts non-collisional energy (action at a distance) into collisional energy in units of intrinsic Joule. The factor omega-2 converts units of intrinsic Joule into units of discrete Joule.
Omega factors correspond to the geometry of graviton-mediated and quanton-mediated phenomena.
We will begin with a brief discussion of electrical (quanton-mediated) phenomena then exclusively focus on gravitational phenomena for the remainder of Part One.
Electrical phenomena
The discrete steric factor, equal to 8, is the number of octants defined by the orthogonal planes of a Cartesian space.
Each octant is one of eight signed triplets (---, -+-, -++, --+, +++, +-+, +--, ++-) which correspond to the directions of the x, y, and z Cartesian axes.
A large number of random molecules, each with a velocity coincident with its center of mass, are within a Cartesian space. If the origin is the center of mass of specific molecule1, then random molecule2 is within one of the eight signed octants and, because the same number of random molecules are within each octant, then the specific molecule1 is within one of the eight signed octants with respect to random molecule2, and the possibility (not probability) of a center of mass collisional interaction between molecule2 and molecule1 is equal to the inverse of the discrete steric factor (one in eight).
The discrete and intrinsic steric factors correspond to the geometries of phenomena governed by discrete kinetic energy (proportional to the discrete Planck constant) and to phenomena governed by intrinsic energy:
The discrete steric factor points inward from a random molecule in the direction of a specific molecule and applies to phenomena mediated by collisional interactions.
The intrinsic steric factor points outward from a specific particle into the surrounding space and applies to phenomena mediated by gravitons and quantons (action at a distance).
The intrinsic molar gas constant, equal to the discrete steric factor, is the intrinsic energy (units of intrinsic Joule) divided by mole Kelvin.
The discrete molar gas constant, equal to the product of the intrinsic molar gas constant and omega-2, is the intrinsic energy (units of discrete Joule) divided by mole Kelvin. The discrete molar gas constant agrees with the CODATA value within 1 part in 13,000.
The ratio of the CODATA electron charge (the elementary charge in units of Coulomb) divided by the charge intrinsic energy (in units of intrinsic Volt) is nearly equal to the discrete molar gas constant.
The intrinsic Boltzmann constant, equal to the ratio of the intrinsic molar gas constant divided by the Avogadro constant, is denominated in units of Einstein per Kelvin.
The discrete Boltzmann constant, equal to the product of omega-2 and the intrinsic Boltzmann constant, and the ratio of the discrete molar gas constant divided by the Avogadro constant, is denominated in units of discrete Joule per Kelvin. The CODATA Boltzmann constant is equal to 1.380649×10-23.
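The gas-constant and Boltzmann-constant claims above are quick to check. This sketch takes omega-2 as the square root of 1.08 (defined in the gravitational section that follows) and the Avogadro constant of 6E23 from the earlier definitions:

```python
omega_2 = 1.08 ** 0.5              # omega-2 = sqrt(1.08)
R_intrinsic = 8                    # intrinsic molar gas constant (the discrete steric factor)
R_discrete = R_intrinsic * omega_2 # approximately 8.3138
R_codata = 8.314462618
# agreement within roughly 1 part in 13,000, as the text states
assert abs(R_discrete - R_codata) / R_codata < 1.1 / 13000

N_A = 6e23                         # Avogadro constant (Faraday / Coulomb)
k_intrinsic = R_intrinsic / N_A    # intrinsic Boltzmann constant, Einstein per Kelvin
k_discrete = R_discrete / N_A      # discrete Boltzmann constant, discrete Joule per Kelvin
```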
Gravitational phenomena
Omega-2, the square root of 1.08, corresponds to one-directional gravitational interactions between non-orbiting objects (objects not by themselves in orbit, that is, the object might be part of an orbiting body but the object itself is not the orbiting body), for example graviton emission by the large lead balls or absorption by the small lead balls in the Cavendish experiment.
Omega-4, 1.08, corresponds to two-directional gravitational interactions (emission and absorption) between non-orbiting objects, for example the acceleration of the large lead balls or the acceleration of the small lead balls in the Cavendish experiment.
Omega-6, the square root of the cube of 1.08, corresponds to gravitational interactions between a planet and moon in a Keplerian orbit where the square root of the cube of the orbital distance divided by the orbital period is equal to a constant.
Omega-8, the square of 1.08, corresponds to four-directional gravitational interactions by non-orbiting objects, for example the acceleration of the small lead balls and the acceleration of the large lead balls in the Cavendish experiment.
Omega-12, equal to the cube of 1.08, corresponds to gravitational interactions between two objects in orbit about each other, for example the sun and a planet in orbit about their mutual barycenter.
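The omega hierarchy above is generated from the single base value 1.08. A minimal sketch of the powers as described:

```python
# The omega values defined above, all powers of the base value 1.08:
# omega-2 = 1.08^(1/2), omega-4 = 1.08, omega-6 = 1.08^(3/2),
# omega-8 = 1.08^2, omega-12 = 1.08^3.
base = 1.08
omegas = {name: base ** exp for name, exp in [
    ("omega-2", 0.5), ("omega-4", 1.0), ("omega-6", 1.5),
    ("omega-8", 2.0), ("omega-12", 3.0)]}
for name, value in omegas.items():
    print(f"{name}: {value:.6f}")
```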
Except where previously defined (the Gravitational constant at the orbital distance of earth, the orbital distance of earth, the mass and volumetric radius of earth, the mass of the sun), the following equations use the NASA2 values for the Newtonian masses, orbital distances, and volumetric radii of the planets.
The local gravitational constant for any of the planets is equal to the product of the Gravitational constant of earth and the Newtonian mass (kilogram mass) of the planet divided by the square of the volumetric radius of the planet.
The v2d value of a planetary moon is equal to the product of the Gravitational constant at the orbital distance of earth and the Newtonian mass of the planet.
The active gravitational mass of a planet, denominated in units of Einstein, is equal to the product of the square of the volumetric radius of the planet and the orbital distance of the planet, divided by the square of the orbital distance of the planet in units of the orbital distance of earth.
The mass of a planet in a Newtonian orbit about the sun (the planet and sun orbit about their mutual barycenter) is a kinetic property. The active gravitational mass of such a planet, denominated in units of Joule, is equal to the product of the active gravitational mass of the planet in units of Einstein and omega-12.
The Gravitational constant at the orbital distance of the planet is equal to the product of the local gravitational constant of the planet and the square of the volumetric radius of the planet, divided by the active gravitational mass of the planet.
The v2d value of a planetary moon is equal to the product of the Gravitational constant at the orbital distance of the planet and the active gravitational mass of the planet.
The v2d value calculated using the NASA orbital parameters for the moon is larger than the above calculated value by a factor of 1.00374; the v2d values calculated using the NASA orbital parameters for the major Jovian moons (Io, Europa, Ganymede and Callisto) are larger than the above calculated values by factors of 1.0020, 1.0016, 1.00131, and 1.00133.
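As a rough sanity check of the v2d relation, a sketch using standard NASA-style values (an assumption of this sketch; these are not the modified constants defined earlier in the text) shows that for a nearly circular orbit v²d is approximately equal to GM of the central body:

```python
# Rough sanity check with standard NASA-style values (not the modified
# constants defined in the text): for a nearly circular orbit,
# v^2 * d is approximately equal to G * M of the central body.
G = 6.674e-11          # m^3 / (kg s^2)
M_earth = 5.972e24     # kg
v_moon = 1.022e3       # m/s, mean orbital speed of the moon
d_moon = 3.844e8       # m, mean orbital distance of the moon

v2d = v_moon**2 * d_moon
GM = G * M_earth
print(v2d / GM)        # close to 1; the text discusses the residual ratio
assert abs(v2d / GM - 1) < 0.02
```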
Newtonian gravitational calculations are extremely accurate for most gravitational phenomena, but there are a number of anomalies for which the Newtonian calculations are inaccurate. The first of these anomalies to come to the attention of scientists, in 1859, was the precession rate of the perihelion of mercury, for which the observed rate was about 43 arcseconds per century larger than the Newtonian calculated rate.3
According to Gerald Clemence, one of the twentieth century’s leading authorities on the subject of planetary orbital calculations, the most accurate method for calculating planetary orbits, the method of Gauss, was derived for calculating planetary orbits within the solar system with distance expressed in astronomical units, orbital period in days and mass in solar masses.4
The Gaussian method was used by Eric Doolittle in what Clemence believed to be the most reliable theoretical calculation of the perihelion precession rate of mercury.5
With modifications by Clemence including newer values for planetary masses, newer measurements of the precession of the equinoxes and a careful analysis of the error terms, the calculated rate was determined to be 531.534 arcseconds per century compared to the observed rate of 574.095 arcseconds per century, leaving an unaccounted deficit of 42.561 arcseconds per century.
The calculations below are based on the method of Price and Rush.6 This method determines a Newtonian rate of precession due to the gravitational influences on mercury by the sun and the five outer planets external to the orbit of mercury (venus, earth, mars, jupiter and saturn). The solar and planetary masses are treated as Newtonian objects, and in calculations of planetary gravitational influences the outer planets are treated as circular mass rings.
The Newtonian gravitational force on mercury due to the mass of the sun is equal to the product of the negative Gravitational constant at the orbital distance of earth, the mass of the sun and the mass of mercury, divided by the square of the orbital distance of mercury.
The Newtonian gravitational force on mercury due to the mass of the five outer planets is equal to the sum of the gravitational force contributions of the five outer planets external to the orbit of mercury. The gravitational force contribution of each planet is equal to the product of the Gravitational constant at the orbital distance of earth, the mass of the planet, the mass of mercury and the orbital distance of mercury, divided by the product of twice the planet’s orbital distance and the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.
The gravitational force ratio is equal to the gravitational force on mercury due to the mass of the five outer planets external to the orbit of mercury divided by the gravitational force on mercury due to the mass of the sun.
The gamma factor is equal to the sum of the gamma contributions of the five outer planets external to the orbit of mercury. The gamma contribution of each planet is equal to the ratio of the product of the mass of the planet, the orbital distance of mercury, and the sum of the square of the planet’s orbital distance and the square of the orbital distance of mercury, divided by the product of 2π, the planet’s orbital distance and the square of the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.
Psi-mercury is equal to the product of π and the quantity one minus the gravitational force ratio minus the product of the Gravitational constant at the orbital distance of earth, π, the mass of mercury and the gamma factor divided by twice the gravitational force on mercury due to the mass of the sun.
The number of arc-seconds in one revolution is equal to 360 degrees times sixty minutes times sixty seconds.
The number of days in a Julian century is equal to 100 times the length of a Julian year in days.
The perihelion precession rate of mercury is equal to the ratio of the product of the difference between 2ψ-mercury and 2π, the number of arc-seconds in one revolution and the number of days in a Julian century, divided by the product of 2π and the NASA sidereal orbital period of mercury in units of day (87.969).
The Newtonian perihelion precession rate of mercury determined above is 0.139 arc-seconds per century less than the Clemence calculated rate of 531.534 arc-seconds per century.
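The Newtonian ring calculation described above can be sketched numerically. The masses and mean orbital distances below are standard NASA-style values, an assumption of this sketch rather than the document's own inputs, and the formulas follow the prose definitions step by step:

```python
import math

# Newtonian perihelion precession of mercury from the ring approximation
# described above (after Price and Rush). Masses (kg) and mean orbital
# distances (m) are standard NASA-style values, an assumption of this sketch.
G = 6.674e-11
M_sun = 1.989e30
m_mercury = 3.301e23
r = 5.79e10  # orbital distance of mercury

# The five planets external to mercury's orbit, treated as circular mass rings.
planets = [  # (mass, orbital distance)
    (4.867e24, 1.082e11),   # venus
    (5.972e24, 1.496e11),   # earth
    (6.417e23, 2.279e11),   # mars
    (1.898e27, 7.785e11),   # jupiter
    (5.683e26, 1.433e12),   # saturn
]

# Force on mercury due to the sun (negative: attractive).
F_sun = -G * M_sun * m_mercury / r**2

# Force on mercury due to the five outer-planet rings.
F_planets = sum(G * M * m_mercury * r / (2 * R * (R**2 - r**2))
                for M, R in planets)

force_ratio = F_planets / F_sun

# Gamma factor, summed over the five rings.
gamma = sum(M * r * (R**2 + r**2) / (2 * math.pi * R * (R**2 - r**2)**2)
            for M, R in planets)

# Apsidal angle psi and the resulting precession rate.
psi = math.pi * (1 - force_ratio
                 - G * math.pi * m_mercury * gamma / (2 * F_sun))
arcsec_per_rev = 360 * 60 * 60       # 1,296,000 arcseconds per revolution
days_per_century = 100 * 365.25      # 36,525 days in a Julian century
T_mercury = 87.969                   # sidereal orbital period of mercury, days

rate = ((2 * psi - 2 * math.pi) * arcsec_per_rev * days_per_century
        / (2 * math.pi * T_mercury))
print(rate)  # roughly 531 arcseconds per century
```

With these rounded inputs the result lands close to the Clemence value of 531.534 arcseconds per century; the small residual reflects the rounded masses and mean-distance approximation.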
The following equations, in the same format as the Newtonian equations, derive the non-Newtonian values (where they differ).
The Newtonian gravitational force on mercury due to the mass of the sun is equal to the product of the negative Gravitational constant at the orbital distance of earth, the mass of the sun and the mass of mercury, divided by the square of the orbital distance of mercury.
The non-Newtonian gravitational force on mercury due to the mass of the five outer planets is equal to the sum of the gravitational force contributions of the five outer planets external to the orbit of mercury. The gravitational force contribution of each planet is equal to the product of the Gravitational constant at the orbital distance of earth, the active gravitational mass (in units of Joule) of the planet, the Newtonian mass of mercury and the orbital distance of mercury, divided by the product of twice the planet’s orbital distance and the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.
The non-Newtonian gravitational force ratio is equal to the gravitational force on mercury due to the mass of the five outer planets external to the orbit of mercury divided by the gravitational force on mercury due to the mass of the sun.
The gamma factor is equal to the sum of the gamma contributions of the five outer planets external to the orbit of mercury. The gamma contribution of each planet is equal to the ratio of the product of the mass of the planet, the orbital distance of mercury, and the sum of the square of the planet’s orbital distance and the square of the orbital distance of mercury, divided by the product of 2π, the planet’s orbital distance and the square of the difference between the square of the planet’s orbital distance and the square of the orbital distance of mercury.
The non-Newtonian value for Psi-mercury is equal to the product of π and the quantity one minus the gravitational force ratio minus the product of the Gravitational constant at the orbital distance of earth, π, the mass of mercury and the gamma factor divided by twice the gravitational force on mercury due to the mass of the sun.
The non-Newtonian perihelion precession rate of mercury is equal to the ratio of the product of the difference between 2ψ-mercury and 2π, the number of arc-seconds in one revolution and the number of days in a Julian century, divided by the product of 2π and the NASA sidereal orbital period of mercury in units of day (87.969).
The non-Newtonian perihelion precession rate of mercury is 6.128 arcseconds per century greater than the observed rate of 574.095 arcseconds per century cited by Clemence.
We have built a model of gravitation proportional to the dimensions of the earth-sun system. A different model, with different values for the physical constants, would be equally valid if it were proportional to the dimensions of a different planet in our solar system or a planet in some other star system in our galaxy.
Our sun and the stars in our galaxy, in addition to graviton flux, emit large quantities of neutral flux that establish Stable Balance orbits for planets that emit relatively small quantities of neutral flux.
Our galactic center emits huge quantities of gravitons and neutral flux, and its dimensional relationship with our sun is dependent on the neutral flux emissions of our sun. If the intrinsic energy of our sun were less, its orbit would be further out from the galactic center, and if it were greater, its orbit would be closer in.
Of two stars at the same distance from the galactic center with different velocities, the star with higher velocity has a higher graviton absorption rate (higher stellar internal energy) and the star with lower velocity has a lower graviton absorption rate (lower stellar internal energy).
Of two stars with the same velocity at different distances from the galactic center, the star closer in will have a higher graviton absorption rate (higher stellar internal energy) and the star further out will have a lower graviton absorption rate (lower stellar internal energy).
The active gravitational mass of the Galactic Center is equal both to the active gravitational mass of the sun divided by Beta-fourth and to the cube of the active gravitational mass of the sun divided by the square of the active gravitational mass of earth.
The second expression of the above equation, generalized and reformatted, asserts that the square root of the cube of the active gravitational mass of any star in the Milky Way, divided by the active gravitational mass of any planet in orbit about the star, is equal to a constant.
The above equation, combined with the detailed explanation of the chirality meshing interactions that mediate gravitational action at a distance, the derivation of solar system non-Newtonian orbital parameters, the derivation of the non-Newtonian rate of precession of the perihelion of mercury, and the detailed explanation of non-Newtonian stellar rotation curves, disproves the theory of dark matter.
Part Two
Structure and chirality
A particle has the property of chirality because its axes are orthogonal and directed, pointing in three perpendicular directions and, like the fingers of a human hand, the directed axes are either left-handed (LH) or right-handed (RH). The electron and antiproton exhibit LH structural chirality and the proton and positron exhibit RH structural chirality. The two chiralities are mirror images.
The electron G-axis (black, index finger) points into the paper, the electron Q-axis (blue, thumb) points up in the plane of the paper, and the north pole of the electron P-axis (red, middle finger) points right in the plane of the paper.
The orientation of the axes of an RH proton are the mirror image: the proton G-axis (black, index finger) points into the paper, the proton Q-axis (blue, thumb) points up in the plane of the paper, and the north pole of the proton P-axis (red, middle finger) points left in the plane of the paper.
For visualizing the orientations above, models are easier to manipulate than human hands.
When Michael Faraday invented the disk generator in 1831, he discovered the conversion of rotational force, in the presence of a magnetic field, into electric current. The apparatus creates a magnetic field perpendicular to a hand-cranked rotating conductive disk and, providing the circuit is completed through a path external to the disk, produces an electric current flowing inward from axle to rim (electron flow not conventional current), photograph below.7
Above left, the electron Q-axis points in the CCW direction of motion. The inertial force within a rotating conductive disk aligns conduction electron G-axes to point in the direction of the rim. The alignment of the Q-axes and G-axes causes the orthogonal P-axes to point down.
Above right, the electron Q-axis points in the CW direction of motion. The inertial force within a rotating conductive disk aligns conduction electron G-axes to point in the direction of the rim. The alignment of the Q-axes and G-axes causes the orthogonal P-axes to point up.
In generally accepted physics (GAP), the transverse alignment of electron velocity with respect to magnetic field direction is attributed to the Lorentz force but, as explained above, it is a consequence of electron chirality.
In addition to the transverse alignment of the electron direction with respect to the direction of the magnetic field, the electron experiences an additional directional change of 20 arcseconds in the azimuthal direction, which causes the electron to spiral in the direction of the axle. Thus, in both a CCW rotating conductive disk and a CW rotating conductive disk, the current (electron flow not conventional current) flows from the axle to the rim.
The geometries of the Faraday disk generator apply to the orientation of conduction electrons in the windings of solenoids and transformers. CCW and CW windings advance in the same direction, below into the plane of the paper. In contrast to the rotating conductor in the disk generator, the windings are stationary, and the conduction electrons spiral through in the direction of the positive voltage supply (which continually reverses in transformers and AC solenoids).
Above left, the electron Q-axes point down in the direction of current flow through the CCW winding. The inertial force on conduction electrons moving through the CCW winding aligns the direction of the electron G-axes to the left. The electron P-axes, perpendicular to both the Q-axes and G-axes, point S→N out of the paper.
Above right, the electron Q-axes point up in the direction of current flow through the CW winding. The inertial force on conduction electrons moving through the CW winding aligns the direction of the electron G-axes to the left. The electron P-axes, perpendicular to both the Q-axes and G-axes, point S→N into the paper.
Above is a turnbuckle composed of a metal frame tapped at each end. On the left end an LH bolt passes through an LH thread, and on the right end an RH bolt passes through an RH thread. If the LH bolt is turned CCW (facing right into the turnbuckle frame), the bolt moves to the right and the frame moves to the left; if the LH bolt is turned CW, the bolt moves to the left and the frame moves to the right. If the RH bolt is turned CW (facing left into the turnbuckle frame), the bolt moves to the left and the frame moves to the right; if the RH bolt is turned CCW, the bolt moves to the right and the frame moves to the left.
In the language of this analogy, a graviton or quanton emitted by the emitting particle is a moving spinning bolt, and the absorbing particle is a turnbuckle frame with a G-axis, Q-axis or P-axis passing through.
In a chirality meshing interaction, absorption of a graviton or quanton by the LH or RH G-axis, Q-axis or P-axis of a particle, causes an attractive or repulsive acceleration proportional to the difference between the graviton or quanton velocity and the velocity of the absorbing particle.
An electron G-axis has a RH inside thread and a proton G-axis has a LH inside thread. An electron G-axis emits CW gravitons and a proton G-axis emits CCW gravitons.
In the bolt-turnbuckle analogy, a graviton is a moving spinning bolt, and the absorbing particle through which the G-axis passes is a turnbuckle frame:
If a CCW graviton emitted by a proton is absorbed into a proton LH G-axis, the absorbing proton is attracted, accelerated in the direction of the emitting proton.
If a CW graviton emitted by an electron is absorbed into an electron RH G-axis, the absorbing electron is attracted, accelerated in the direction of the emitting electron.
Protons and electrons do not gravitationally interact with each other: a proton is larger than an electron, a graviton emitted by a proton is larger than a graviton emitted by an electron, and the inside thread of a proton G-axis is larger than the inside thread of an electron G-axis. These size differences prevent a graviton emitted by an electron from meshing with a proton G-axis, and a graviton emitted by a proton from meshing with an electron G-axis.
Tangible objects are composed of atoms which are composed of protons, electrons and neutrons.
In gravitational interactions between tangible objects (with kilogram mass greater than one microgram, about 1E20 particles), the total intensity of the interaction is the sum of the contributions of the electrons and protons of which the object is composed (note that neutrons themselves do not gravitationally interact, but each neutron is composed of one electron and one proton, both of which do gravitationally interact).
A particle Q-axis is a single-ended hollow cylinder. The mechanism of the Q-axis is analogous to a piston which moves up and down at a frequency proportional to charge intrinsic energy. At the end of each up-stroke a single quanton is emitted. The absorption window opens at the beginning of the up-stroke and remains open until the beginning of the downstroke or the absorption of a single quanton.
The difference (the intrinsic granularity) between the inside diameter of the hollow cylindrical Q-axis and the outside diameter of the quanton allows absorption of incoming quantons at angles that can deviate from normal (straight down the center) by plus or minus 20 arcseconds.
An electron Q-axis has a RH inside thread and a proton Q-axis has a LH inside thread. An electron Q-axis emits CCW quantons and a proton Q-axis emits CW quantons.
In the bolt-turnbuckle analogy, a quanton is a moving spinning bolt, and the absorbing particle through which the Q-axis passes is a turnbuckle frame:
If a CCW p-quanton emitted by a proton is absorbed into an electron RH Q-axis, the absorbing electron is attracted, accelerated in the direction of the emitting proton.
If a CCW p-quanton emitted by a proton (or the anode plate in a CRT) is absorbed into a proton LH Q-axis, the absorbing proton is repulsed, accelerated in the direction of the cathode plate (opposite the direction of the emitting proton).
If a CW e-quanton emitted by an electron is absorbed into an electron RH Q-axis, the absorbing electron is repulsed, accelerated in the direction opposite the emitting electron.
If a CW e-quanton emitted by an electron (or the cathode plate in a CRT) is absorbed into a proton LH Q-axis, the absorbing proton is repulsed, accelerated in the direction of the cathode plate (the direction opposite the emitting electron).
In a CRT, the Q-axis of an accelerated electron is oriented in the linear direction of travel and its P and G axes are oriented transverse to the linear direction of travel. After the electron is linearly accelerated, the electron passes between oppositely charged parallel plates that emit quantons perpendicular to the linear direction of travel, and these quantons are absorbed into the electron P-axes. The chirality meshing interactions between an electron with a linear direction of travel and the quantons emitted by either plate result in a transverse acceleration in the direction of the anode plate:
An incoming CCW p-quanton approaching an electron RH P-axis within less than 20 arcseconds deviation from normal (straight down the center) is absorbed in an attractive chirality meshing interaction in which the electron is deflected in the direction of the anode plate.
An incoming CW e-quanton approaching an electron RH P-axis within less than 20 arcseconds deviation from normal (straight down the center) is absorbed in a repulsive chirality meshing interaction in which the electron is deflected in the direction of the anode plate.
This is the mechanism of the experimental determination of the electron-proton deflection ratio.
The magnitude of the ratio between these masses is not equal to the ratio of the measured gravitational deflections but rather to the inverse of the ratio of the measured electric deflections. It would not matter which of these measurable quantities were used in the experimental determination if Newton’s laws of motion applied. However, in order for Newton’s laws to apply, the assumptions behind Newton’s laws, specifically the 100% probability that particles gravitationally and electrically interact, must also apply. But this is not the case for action at a distance.
The electron orientation below top left, rotated 90 degrees CCW, is identical to the electron orientations previously illustrated for a CW disk generator or a CW-wound transformer or solenoid; and the electron orientation bottom left is a 180 degree rotation of top left.
Above are reversals in Q-axis orientation due to reversals in the direction of incoming quantons.
Above top right and bottom right are the left-side electron orientations with the electron Q-axis directed into the plane of the paper (confirmation of the perspective transformation is easier to visualize with a model). These are the orientations of conduction electrons in an AC current.
In the top row, CW quantons emitted by the positive voltage source are absorbed in chirality meshing interactions by the electron RH Q-axis, attracting the absorbing electron. In the bottom row, CCW quantons emitted by the negative voltage source are absorbed in chirality meshing interactions into the electron RH Q-axis, repelling the absorbing electron.
In either case the direction of current is into the paper.
In an AC current, a reversal in the direction of current is also a reversal in the rotational chirality of the quantons mediating the current.
In a current moving in the direction of a positive voltage source each linear chirality meshing absorption of a CW p-quanton into an electron RH Q-axis results in an attractive deflection.
In a current moving in the direction of a negative voltage source each linear chirality meshing absorption of a CCW e-quanton into an electron RH Q-axis results in a repulsive deflection.
In an AC current, each reversal in the direction of current, reverses the direction of the Q-axes of the conduction electrons. This reversal in direction is due to a complex rotation (two simultaneous 180 degree rotations) that results in photon emission.
During a shorter or longer period of time (the inverse of the AC frequency) during which the direction of current reverses, a shorter or longer inductive pulse of electromagnetic energy flows into the electron Q and P axes and the quantons of which the electromagnetic energy is composed are absorbed in rotational chirality meshing interactions.
Above left, the electron P and Q axes mesh together at their mutual orthogonal origin in a mechanism analogous to a right angle bevel gear linkage.8
Above center and right, an incoming CCW quanton induces an inward CCW rotation in the Q-axis and causes a CW outward (CCW inward) rotation of the P-axis. The rotation of the Q-axis reverses the orientation of the P-axis and G-axis, and the rotation of the P-axis reverses the orientation of the Q-axis and the orientation of the G-axis thereby restoring its orientation to the initial direction pointing left and perpendicular to a tangent to the cylindrical wire.
Above center and right, an incoming CW quanton induces an inward CW rotation in the Q-axis and causes a CCW outward (CW inward) rotation of the P-axis. The rotation of the Q-axis reverses the orientation of the P-axis and G-axis, and the rotation of the P-axis reverses the orientation of the Q-axis and the orientation of the G-axis thereby restoring its orientation to the initial direction pointing left and perpendicular to a tangent to the cylindrical wire.
In either case the electron orientations are identical, but CCW electron rotations cause the emission of CCW photons and CW electron rotations cause the emission of CW photons.
The absorption of CCW e-quantons by the Q-axis rotates the Q-axis CCW by the square root of 648,000 arcseconds (180 degrees) and the P-Q axis linkage simultaneously rotates the P-axis CW by the square root of 648,000 arcseconds (180 degrees).
If the orientation of the electron G-axis is into the paper in a plane defined by the direction of the Q-axis, the CCW rotation of the Q-axis tilts the plane of the G-axis down by the square root of 648,000 arcseconds and the CW rotation of the P-axis tilts the plane of the G-axis to the right by the square root of 648,000 arcseconds.
The net rotation of the electron G-axis is equal to the product of the square root of 648,000 arcseconds and the square root of 648,000 arcseconds.
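The arithmetic of the two square-root rotations above is small enough to check directly:

```python
import math

# 180 degrees expressed in arcseconds, and the two component rotations
# described above, each the square root of that value; their product
# restores the full 648,000 arcseconds (180 degrees).
half_turn_arcsec = 180 * 60 * 60          # 648,000 arcseconds
component = math.sqrt(half_turn_arcsec)   # ≈ 805 arcseconds
net = component * component               # product of the two rotations
print(component, net)
assert abs(net - 648000) < 1e-6
```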
In the production of photons by an AC current, the photon wavelength and frequency are proportional to the current reversal time, and the photon energy is proportional to the voltage.
Above, an axial projection of the helical path of a photon traces the circumference of a circle and the sine and cosine are transverse orthogonal projections.9 The crest to crest distance of the transverse orthogonal projections, or the distance between alternate crossings of the horizontal axis, is the photon wavelength.
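A minimal parametrization of such a helical path (the radius and pitch below are illustrative values, an assumption of this sketch) shows that the axial projection traces a circle, the transverse projections are cosine and sine, and the crest-to-crest distance of a transverse projection equals the pitch:

```python
import math

# Helical path along z: the axial projection (x, y) traces a circle, and
# the transverse projections x(z), y(z) are cosine and sine. The
# crest-to-crest distance of either transverse projection equals the pitch
# (the wavelength in the text's description). Illustrative values only.
radius = 1.0
pitch = 5.0  # advance along z per full turn

def position(t):
    """Point on the helix at parameter t (in turns)."""
    angle = 2 * math.pi * t
    return (radius * math.cos(angle), radius * math.sin(angle), pitch * t)

# Crests of x(z) occur at whole turns: z advances by one pitch per crest.
x0, _, z0 = position(0.0)
x1, _, z1 = position(1.0)
assert abs(x0 - radius) < 1e-12 and abs(x1 - radius) < 1e-12  # both crests
assert abs((z1 - z0) - pitch) < 1e-12  # crest-to-crest distance = pitch

# The axial projection stays on the circle of the given radius.
for t in (0.1, 0.37, 0.8):
    x, y, _ = position(t)
    assert abs(math.hypot(x, y) - radius) < 1e-12
print("crest-to-crest distance:", z1 - z0)
```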
The helical path of photons explains diffraction by a single slit, by a double slit, by an opaque circular disk, or a sphere (Arago spot).
In a beam of photons with velocity perpendicular to a flat screen or sensor, each individual photon makes a separate impact that can be sensed or is visible somewhere on the circumference of one of many separate and non-overlapping circles corresponding to all of the photons in the beam. The divergence of the beam increases the spacing between circles and the diameter of each individual photon circle which is proportional to the wavelength of each individual photon. The sensed or visible photon impacts form a region of constant intensity.
Below, the top image shows those photons, initially part of a photon beam illuminating a single slit, which passed through the single slit.10
Above, the bottom image shows those photons, initially part of a photon beam illuminating a double slit, that passed through a double slit.
Below, the image illustrating classical rays of light passing through a double slit is equally illustrative of a photon beam illuminating a double slit but, instead of constructive and destructive interference, the photons passing through the top slit diverge to the right and photons passing through the bottom slit diverge to the left. The spaces between divergent circles are dark and, due to coherence, the photon circles are brightest at the distance of maximum overlap, resulting in the characteristic double slit brighter-darker diffraction pattern.11
The mechanism of diffraction by an opaque circular disk or a sphere (Arago spot) is the same. In either case the opaque circular disk or sphere is illuminated by a photon beam of diameter larger than the diameter of the disk or sphere.
The photons passing close to the edge of the disk or sphere diverge inwards, and the spiraling helical path of an inwardly diverging CW photon passing one side of the disk will intersect, in a head-on collision, the spiraling helical path of an inwardly diverging CCW photon passing on the directly opposite side of the disk or sphere (if the opposite-chirality photons are equidistant from the center of the disk or sphere).
In the case of a sphere illuminated by a laser, the surface of the sphere must be smooth and the ratio of the square of the diameter of the sphere divided by the product of the distance from the center of the sphere to the screen and the laser wavelength must be greater than one (similar to the Fresnel number).
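As an illustrative numerical example of the stated condition (the sphere diameter, distance, and wavelength below are assumed values, not from the text), the ratio is easy to evaluate:

```python
# Condition stated above for the Arago spot: d^2 / (L * wavelength) > 1,
# similar to the Fresnel number. Illustrative assumed values: a 4 mm smooth
# sphere, a screen 1 m from the sphere's center, a 633 nm He-Ne laser.
d = 4e-3             # sphere diameter, m
L = 1.0              # distance from center of sphere to screen, m
wavelength = 633e-9  # m

number = d**2 / (L * wavelength)
print(number)        # about 25, comfortably greater than 1
assert number > 1
```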
Photon velocity
Constant photon velocity is due to a resonance driven by the emission of photon intrinsic energy which results in an increase in wavelength and a proportional decrease in frequency. In a related phenomenon, Arthur Holly Compton demonstrated Compton scattering in which the loss of photon kinetic energy does not change velocity but increases wavelength and proportionally decreases frequency.12
The mechanism of constant photon velocity is the emission of quantons and gravitons.
Below top, looking down into the plane of the paper a photon G-axis points in the direction of photon velocity and the P and Q-axes are orthogonal. In the language of the turnbuckle analogy, the mechanism of the photon P and Q-axes are analogous to pistons which move up and down or back and forth and emit a single quanton or graviton at the end of each stroke.
Above middle, in column A of the P-axis row, at the position of the oscillation the up-stroke has just completed, a single graviton has been emitted, and the current direction of the oscillation is now down. In column B of the P-axis row, the position of the oscillation is mid-way, and the direction of the oscillation is down. In column C of the P-axis row, at the position of the oscillation the downstroke has just completed, a single graviton has been emitted, and the current direction of the oscillation is up. In column D of the P-axis row, the position of the oscillation is mid-way, and the direction of the oscillation is up.
Above middle, in column A of the Q-axis row, the position of the oscillation is mid-way and the direction of oscillation is left. In column B of the Q-axis row, at the position of the oscillation the left-stroke has just completed, a single quanton has been emitted, and the current direction of the oscillation is right. In column C of the Q-axis row, the position of the oscillation is mid-way and the direction of the oscillation is right. In column D of the Q-axis row, at the position of the oscillation the right-stroke has just completed, a single quanton has been emitted, and the current direction of the oscillation is left.
Above bottom left or bottom right, in each cycle of the photon frequency there are eight sequential CCW or CW alternating quanton/graviton emissions, and the intrinsic energy of the photon is reduced by Lambda-bar on each emission.
This is the mechanism of intrinsic redshift.
Part Three
Nuclear magnetic resonance
In the 1922 Stern-Gerlach experiment, a molecular beam of identical silver atoms passed through an inhomogeneous magnetic field. Contrary to classical expectations, the beam of atoms did not diverge into a cone with intensity highest at the center and lowest at the outside. Instead, atoms near the center of the beam were deflected with half the silver atoms deposited on a glass slide in an upper zone and half deposited in a lower zone, illustrating “space quantization.”
The Stern-Gerlach experiment, designed to test directional quantization in a magnetic field as predicted by old quantum theory (the Bohr-Sommerfeld hypothesis)13, was conducted two years before intrinsic spin was conceived by Wolfgang Pauli and six years before Paul Dirac formalized the concept. Intrinsic spin became part of the foundation of new quantum theory.
The concept of intrinsic spin, in which the property causing the deflection of silver atoms in two opposite directions ("space quantization") is inherent in the particle itself, is incorrect.
However, a molecular beam composed of atoms with magnetic moments passed through a Stern-Gerlach apparatus does exhibit the numerical property attributed to intrinsic spin. This property, interactional spin, is not inherent in the atom but is dependent on external factors.
The protons within a nucleus are the origin of spin, magnetic moment, Larmor frequency, and other nuclear gyromagnetic properties. A nucleus contains “ordinary protons” which, for clarity, will be termed Pprotons, and “protons within neutrons” will be termed Nprotons.
In nuclei with an even number of Pprotons, the Pproton magnetic flux is contained within the nucleus and does not contribute to the nuclear magnetic moment.
With neutrons the situation is quite different. A neutron is achiral: it is a composite particle composed of an Nproton-electron pair and binding energy, it has no G-axis therefore does not gravitationally interact, and no Q-axis therefore is electrically neutral.
Within a nucleus, a neutron does not have a magnetic moment, but the Nproton and electron of which a neutron is composed do have magnetic moments. (A free neutron, during its less than 15-minute mean lifetime after emission from its nucleus, has a measurable magnetic moment, but there are no free neutrons within nuclei.)
The gyromagnetic properties of a nucleus, its magnetic moment, its spin, its Larmor frequency, and its gyromagnetic ratio are due to Pprotons and Nprotons.
A molecular beam (composed of nuclei, atoms and/or molecules) emerging from an oven into a vacuum will have a thermal distribution of velocities. Molecules within the beam are subject to collisions with faster or slower molecules that cause rotations and vibrations, and the orientations of unpaired Pprotons and unpaired Nprotons are constantly subject to change.
In a silver atom there is a single unpaired Pproton and the orientation of its P-axis, with respect to its direction of motion through an inhomogeneous magnetic field, will be either leading or trailing. Out of a large number of unpaired Pprotons, the P-axes will be leading 50% of the time and trailing 50% of the time, and a silver atom containing an unpaired Pproton with a leading P-axis can be deflected in the direction of the inhomogeneous magnetic north pole while a silver atom containing an unpaired Pproton with a trailing P-axis can be deflected in the direction of the south pole.
If the magnetic field is strong enough for a sufficient percentage of unpaired Pprotons (whose orientations are constantly changing) to encounter lines of magnetic flux within 20 arcseconds and be deflected up or down, the molecular beam of silver atoms deposited on a glass slide at the center of the magnetic field (where it is strongest) will be split into two zones. Consistent with the definition of spin as the number of zones minus one, divided by two (S = (z - 1)/2), a Stern-Gerlach experiment therefore determines a spin equal to ½. This result is the only example of spin clearly determined by the position of atoms deposited on a glass slide.14
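The zone-counting rule just stated can be written as a one-line function; a minimal sketch (the function name is ours):

```python
def spin_from_zones(z: int) -> float:
    """Spin S inferred from the number of deposition zones via S = (z - 1) / 2."""
    return (z - 1) / 2

# Two zones on the glass slide, as in the Stern-Gerlach silver result:
print(spin_from_zones(2))  # 0.5
```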
The above explanation is correct for silver atoms passed through the inhomogeneous magnetic fields of the Stern-Gerlach apparatus, but in the 1939 Rabi experimental apparatus15 (upon which modern molecular beam apparatus are modeled) the mechanism of deflection due to leading or trailing P-axes has nothing to do with the results achieved.
The 1939 Rabi experimental apparatus included back-to-back Stern-Gerlach inhomogeneous magnetic fields with opposite magnetic field orientations, but the result that dramatically changed physics, the accurate measurement of the Larmor frequency of nuclei, was done in a separate Rabi analyzer placed between the inhomogeneous magnetic fields. To Rabi, the importance of the Stern-Gerlach inhomogeneous magnets was for use in the alignment and tuning of the entire apparatus.
In a Rabi analyzer there is a strong constant magnetic field and a weaker transverse oscillating magnetic field. The purpose of the strong constant field is to decouple (increase the separation distance between) electrons and protons. The purpose of the transverse oscillating field is to stimulate the emission of photons by the decoupled protons.
When the Rabi apparatus is initially assembled, before installation of the Rabi analyzer the Stern-Gerlach apparatus is set up and tuned such that the intensity of the molecular beam leaving the apparatus is equal to its intensity upon entering.
After the unpowered Rabi analyzer is mounted between the Stern-Gerlach magnets, and the molecular beam exiting the first inhomogeneous magnetic field passes through the Rabi analyzer and enters the second inhomogeneous magnetic field, the intensity of the molecular beam leaving the apparatus decreases. In this state the entire Rabi apparatus is tuned and adjusted until the intensity of the entering molecular beam is equal to the intensity of the exiting beam.
When the crossed magnetic fields of the Rabi analyzer are switched on, for a second time the intensity of the exiting beam decreases. Then, by adjustment of the relative positions and orientations of the three magnetic fields (and also adjustment of the detector position to optimally align with decoupled protons in the nucleus of interest) the intensity of the exiting beam is returned to its initial value.
During an operational run, the transverse oscillating field stimulates the emission of photons at the same frequency as that of the transverse oscillating magnetic field. This photon frequency is equal to the Larmor frequency of the nucleus, and the Larmor frequency divided by the strong magnetic field strength is equal to the gyromagnetic ratio. The Larmor frequency has a very sharp resonant peak limited only by the accuracy of the two experimental measurables: the intensity of the strong magnetic field and the frequency of the oscillating weak magnetic field.
The gyromagnetic ratios of Li6, Li7, and F19, experimentally determined by Rabi in 1939, agree with the 2014 INDC16 values to better than 1 part in 60,000. Importantly, measurements of the gyromagnetic ratios of Li6 and Li7 were made in three different lithium molecules (LiCl, LiF, and Li2) requiring three separate operational runs, thereby demonstrating the Rabi analyzer was adjusted to optimally detect the nucleus of interest.
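The two experimental measurables combine into the gyromagnetic ratio as described above. The sketch below uses hypothetical numbers, purely to illustrate the 1-part-in-60,000 style of comparison (the function names and values are ours, not Rabi's data):

```python
def gyromagnetic_ratio(larmor_frequency: float, field_strength: float) -> float:
    """Gyromagnetic ratio: the Larmor frequency divided by the strong-field strength."""
    return larmor_frequency / field_strength

def relative_difference(measured: float, reference: float) -> float:
    """Fractional disagreement between two determinations of the same ratio."""
    return abs(measured - reference) / reference

# Hypothetical values only: a measured ratio agreeing with a reference value
# to better than 1 part in 60,000.
measured = gyromagnetic_ratio(62.8932, 1.0)
reference = 62.8925
print(relative_difference(measured, reference) < 1 / 60000)  # True
```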
Modern determinations of spin are based on various types of spectroscopy, the results of which stand out as peaks in the collected data.
The magnetic flux of nuclei with an even number of Pprotons and Nprotons circulates in flux loops between pairs of Pprotons and pairs of Nprotons, and such nuclei do not have magnetic moments. The flux loops within nuclei with an odd number of Pprotons and/or Nprotons do have magnetic moments. In order for all nuclei of the same isotope to have zero or non-zero magnetic moments of the same amplitude, it is necessary for the magnetic flux loops to be circulating in the same plane.
All of the 106 selected magnetic nuclear isotopes from Lithium to Uranium, including all stable isotopes with atomic number (Z) greater than 2, plus a number of important isotopes with relatively long half-lives, belong to one of twelve different Types. The Type is determined based on the spin of the isotope and the number of odd and even Pprotons and Nprotons.
An isotope contains an internal physical structure to which the property of magnetic moment correlates, but the magnetic moment is not entirely determined by the internal physical structure of a nucleus. The property of interactional spin is that portion of the magnetic moment due to factors external to the nucleus, including electromagnetic radiation, magnetic fields, electric fields and excitation energy.
Of significance to the present discussion, the detectable magnetic properties of 82 of the 106 selected isotopes (the relative spatial orientations of the flux loops associated with the Pprotons and Nprotons) can be manipulated by four different orientations of directed planar electric fields.
The magnetic signatures of the 106 selected isotopes can be sorted into twelve isotope Types with seven spin values.
Spin ½ isotopes with an odd number of Pprotons and even number of Nprotons are Type A-0. Of the 106 selected isotopes, 10 are Type A-0.
Spin ½ isotopes with an even number of Pprotons and odd number of Nprotons (odd/even Reversed) are Type RA-0. Of the 106 selected isotopes, 14 are Type RA-0.
Spin 1 isotopes with an odd number of Pprotons and an odd number of Nprotons are Type B-1. Of the 106 selected isotopes, 2 are Type B-1.
Spin 3/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-1. Of the 106 selected isotopes, 18 are Type C-1.
Spin 3/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-1. Of the 106 selected isotopes, 12 are Type RC-1.
Spin 5/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-2. Of the 106 selected isotopes, 13 are Type C-2.
Spin 5/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-2. Of the 106 selected isotopes, 11 are Type RC-2.
Spin 3 isotopes with an odd number of Pprotons and an odd number of Nprotons are Type B-3. Of the 106 selected isotopes, 2 are Type B-3.
Spin 7/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type A-3. Of the 106 selected isotopes, 9 are Type A-3.
Spin 7/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RA-3. Of the 106 selected isotopes, 8 are Type RA-3.
Spin 9/2 isotopes with an odd number of Pprotons and even number of Nprotons are Type C-4. Of the 106 selected isotopes, 3 are Type C-4.
Spin 9/2 isotopes with an even number of Pprotons and odd number of Nprotons are Type RC-4. Of the 106 selected isotopes, 4 are Type RC-4.
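The twelve Types enumerated above follow a simple rule: the spin fixes the base label, and an even-Pproton/odd-Nproton count adds the Reversed prefix. A sketch of that rule (the function and table names are ours):

```python
# Base Type label for each spin value; B Types occur only in odd/odd isotopes,
# which never take the Reversed prefix.
TYPE_BY_SPIN = {0.5: "A-0", 1.0: "B-1", 1.5: "C-1", 2.5: "C-2",
                3.0: "B-3", 3.5: "A-3", 4.5: "C-4"}

def isotope_type(spin: float, pprotons: int, nprotons: int) -> str:
    """Type label from the spin and the odd/even counts of Pprotons and Nprotons."""
    base = TYPE_BY_SPIN[spin]
    if pprotons % 2 == 0 and nprotons % 2 == 1:  # even/odd: the Reversed cases
        return "R" + base
    return base

# Lowest atomic number examples named in the text:
print(isotope_type(0.5, 7, 8))    # 7N15   -> A-0
print(isotope_type(0.5, 6, 7))    # 6C13   -> RA-0
print(isotope_type(1.0, 3, 3))    # 3Li6   -> B-1
print(isotope_type(4.5, 32, 41))  # 32Ge73 -> RC-4
```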
Above, the horizontal line is in the inspection plane. The vertical line, the photon path to the Rabi analyzer, is parallel to the constant magnetic field. The circle indicates the diameter of the molecular beam, and the crosshairs indicate the velocity of the beam is directed into the paper.
A molecular beam is not needed for the operation of a Rabi analyzer; all that is required is for an analytical sample (gas or liquid phase), comprising a large number of molecules containing a larger number of nuclei enclosing an even larger number of particles, to be located at the intersection of the crosshairs.
The position of the horizontal inspection plane is irrelevant to Rabi analysis but it is crucial for spectroscopic analysis of flux loops.
Above left, the molecular beam (directed into the paper in the previous illustration) is directed from right to left, and the photon path to the Rabi analyzer is in the same location as in the previous illustration.
For spectroscopic analysis, the inspection plane is the plane defined by the direction the molecular beam formerly passed and the direction of the positive electric field when pointing up.
Above right, the inspection plane for spectroscopic analysis is labelled at each corner. The dashed line in place of the former position of the molecular beam is an orthogonal axis (OA) passing through the direction of the positive side of the electric field when pointing up (UP), and passing through the direction of the spectroscopic detectors (SD).
The intersection of OA, UP and SD is the location where the analytical sample (gas or liquid phase) is placed in the inspection plane. The electric field that orients particle Q-axes is in the inspection plane.
The detection of ten of the twelve Types of magnetic signatures (in the 106 selected isotopes) requires one of four alignments of directed electric fields: the positive side of the electric field pointing up, the positive side of the electric field pointing right, the positive side of the electric field pointing down, or the positive side of the electric field pointing left.
The four possible alignments of the electric field are illustrated on either side of the inspection plane (but in operation the entire breadth of the electric field points in the same direction) and the directed lines on the edges of the inspection plane represent the positions of thin wire cathodes that produce planar electric fields.
Prior to an operational run, the spectroscopic detectors are adjusted to optimally detect the magnetic properties of the isotope to be analyzed.
Above is a summary of isotope magnetic signatures.
Column 1 lists the twelve magnetic isotope Types.
In column 2, with the P-axes of particles oriented by a constant magnetic field directed up in the direction of the magnetic north pole and in the absence of a directed electric field, the magnetic signatures due to flipping odd Pproton P-axes (the arrow on the left of the vignette) and odd Nproton P-axes (the arrow on the right of the vignette) are illustrated.
See below, in the detailed discussion of Type B-1, for the reason there is a zero instead of an arrow in Types B-1 and B-3.
The magnetic signatures due to flux loops in the presence of the four orientations of an electric field, are given in columns 3, 4, 5 and 6 for electric fields directed up, directed down, directed to the right, or directed to the left.
In illustrations of flux loop magnetic signatures: if the arrows are oriented up and down, the arrow on the left of the vignette represents the direction of the Pproton flux loops and the arrow on the right represents the direction of the Nproton flux loops; if the arrows are oriented left and right, the arrow on the top of the vignette represents the direction of the Pproton flux loops and the arrow on the bottom represents the direction of the Nproton flux loops.
In total there are six directed orthogonal planes in Cartesian space but only four of these are represented in columns 3, 4, 5 and 6. This omission is due to the elliptical planar shape of magnetic flux loops: the missing orientations provide edge-on views without detectable magnetic signatures.
Type A-0
7N15, with 7 Pprotons and 8 Nprotons, is the lowest atomic number Type A-0 isotope. In Type A-0 isotopes the flux loops associated with Pprotons and Nprotons lie in a directed Cartesian plane without detectable flux loop signatures.
In an analytical sample, 50% of the odd (unpaired) Pproton P-axes will be oriented in one direction and 50% in the opposite direction. The orientation of the magnetic axis of the odd Pproton is flipped by the transverse oscillating magnetic field, and the spectroscopic detectors sense two different magnetic signatures, resulting in two peaks corresponding to a spin of ½.
Above is the magnetic signature of Type A-0. The left arrow pointing up is the direction of the odd Pproton P-axis after emission of a photon (previously the constant magnetic field aligned the Pproton P-axis in this orientation, then absorption of intrinsic energy from the transverse oscillating magnetic field flipped the axis to pointing down then, due to the 180 degree rotation of the P-Q axes with respect to the direction of the G-axis, the absorbed intrinsic energy was released as a photon when the axis was flipped back to pointing up). The arrow pointing down is the antiparallel direction of the P-axis of a paired Nproton (which does not emit a photon).
The experimental detection of Type A-0 isotopes requires a constant magnetic field oriented in the direction of magnetic north.
Type RA-0
6C13, with 6 Pprotons and 7 Nprotons, is the lowest atomic number Type RA-0 isotope. In Type RA-0 isotopes the flux loops associated with Pprotons and Nprotons lie in a directed Cartesian plane without detectable flux loop signatures.
In an analytical sample, 50% of the odd (unpaired) Nproton P-axes will be oriented in one direction and 50% in the opposite direction. The orientation of the magnetic axis of the odd Nproton is flipped by the transverse oscillating magnetic field, and the spectroscopic detectors sense two different magnetic signatures, resulting in two peaks corresponding to a spin of ½.
Above is the magnetic signature of Type RA-0. The left arrow pointing up is the direction of the P-axis of a paired Pproton (which does not emit a photon). The right arrow pointing down is the direction of the odd Nproton P-axis after emission of a photon (previously the constant magnetic field aligned the Nproton P-axis in this orientation, then absorption of intrinsic energy from the transverse oscillating magnetic field flipped the axis to pointing up then, due to the 180 degree rotation of the P-Q axes with respect to the direction of the G-axis, the absorbed intrinsic energy was released as a photon when the axis was flipped back to pointing down).
The experimental detection of Type RA-0 isotopes requires a constant magnetic field oriented in the direction of magnetic north.
Type B-1
3Li6, with 3 Pprotons and 3 Nprotons, is the lowest atomic number Type B-1 isotope. In isotopes with odd numbers of both Pprotons and Nprotons, the odd Pproton interacts with the electron in the odd Nproton, preventing electron-Nproton decoupling by the constant magnetic field, so the odd Nproton P-axis cannot be flipped by the transverse oscillating magnetic field. The electron-Pproton pair, however, is decoupled, and the orientation of the odd Pproton magnetic axis is flipped by the transverse oscillating magnetic field; the spectroscopic detectors, adjusted to optimally recognize the magnetic signatures of 3Li6, sense one distinctive magnetic signature, resulting in one peak.
In Type B-1, the odd Nproton P-axis is unable to be flipped thus there is no magnetic signature due to the Nproton itself, but both the Nproton and the Pproton have associated flux loops and spectroscopic detectors can sense the magnetic signatures of the flux loops in the presence of a directed electric field pointing up.
In the analysis of isotopes with detectable flux loop signatures there are four possible orientations of the directed electric fields. The magnetic flux loops associated with Type-1 isotopes are detectable if the directed electric field is pointing up. The magnetic flux loops associated with Type-2 isotopes are detectable if the directed electric field is pointing down. The magnetic flux loops associated with Type-3 isotopes are detectable if the directed electric field is pointing right. The magnetic flux loops associated with Type-4 isotopes are detectable if the directed electric field is pointing left.
Each of these directed electric field orientations requires a different experiment; therefore, the results of five experiments (including one experiment without directed electric fields) are needed to fully establish the Type of an unknown isotope.
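The experimental checklist implied above can be laid out programmatically; a sketch, assuming (as the per-Type discussions suggest) that the numeric suffix of a Type counts how many electric-field orientations its identification requires, in the order up, down, right, left:

```python
ORIENTATIONS = ["up", "down", "right", "left"]  # order given in the text

def required_runs(type_label: str) -> list:
    """Experiments needed to establish a Type: one magnetic-field-only run,
    plus one directed-electric-field run per orientation up to the suffix."""
    suffix = int(type_label.split("-")[1])
    return (["constant magnetic field only"]
            + ["electric field pointing " + o for o in ORIENTATIONS[:suffix]])

print(required_runs("B-1"))        # the magnetic-only run plus the field-up run
print(len(required_runs("RC-4")))  # 5
```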
The flux loops circulating through particle P-axes can pass through all radial planes. The radial flux planes in the above diagram are in the plane of the paper, demonstrating that, when detected from opposite directions, flux loops will be CW (directed right-left) or CCW (directed left-right).
Since Pprotons and Nprotons are oppositely aligned, a CW Pproton signature is identical to an Nproton CCW signature, and a CCW Pproton signature is identical to an Nproton CW signature.
Because the magnetic signatures of the particles in the field of view of a detector are differently oriented, on average 50% of the flux loop magnetic signatures will be CW and 50% CCW. Of the 50% of the CW signatures 25% will be due to Pprotons and 25% due to Nprotons, and of the 50% of the CCW signatures 25% will be due to Pprotons and 25% due to Nprotons.
Thus, there will be two different magnetic signatures resulting in two peaks, but we are unable to distinguish which is due to CW Pproton flux loops or CCW Nproton flux loops, and which is due to CCW Pproton flux loops or CW Nproton flux loops.
In Type B-1, the magnetic signature due to the odd Pproton (experimentally determined in the absence of an electric field) has one peak, and the magnetic signature due to flux loops associated with Pprotons and Nprotons (experimentally determined in an electric field oriented parallel to the magnetic field) has two peaks, totaling three peaks corresponding to a spin of 1.
Here we come to a fundamental issue. Is the uncertainty in situations involving linked physical properties (complementarity) described by probability, or is it caused by probability? In 1925 Werner Heisenberg theorized this type of uncertainty was caused by probability, and that opinion became, along with intrinsic spin, an important part of the foundation of new quantum theory.
In nature, the orientation of the magnetic signatures of isotopes and the orientation of the nuclei containing the particles responsible for the magnetic signatures are random. The magnetic signatures due to a large number of randomly oriented particles are indistinguishable from background noise, but under the proper experimental conditions, the magnetic signatures are discernable.
The magnetic signatures of flux loops, imperceptible in nature, are perceptible when the Q-axes of the associated particles are aligned.
A constant magnetic field is not needed to detect the magnetic signatures of flux loops. However, the inspection plane used to detect flux loop signatures is in the identical position as in the Rabi analyzer, and the directed orthogonal plane pointing up in the direction of magnetic north in the Rabi analyzer is identical to the directed orthogonal plane pointing up in the direction of the positive electric field in the flux loop analyzer; that is, the direction of the electric field is parallel to the magnetic field.
Therefore, even though a magnetic field is not needed to detect the magnetic signatures of flux loops, if a magnetic field is present in addition to the directed electric field, its presence would not alter the experimental results but might provide additional information.
Here is a prediction of the present theory. If the experiment detecting the magnetic signature of Type B-1 is conducted in the presence of a constant magnetic field and a directed electric field pointing up, that one experiment will determine the magnetic signatures shown above plus two additional signatures: (1) the magnetic signature due to CW Pproton flux loops and CCW Nproton flux loops and (2) the magnetic signature due to CW Nproton flux loops and CCW Pproton flux loops.
This result would demonstrate that the uncertainty in at least one situation involving linked physical properties is described by probability but is not caused by probability. This experiment, and others yet to be devised, will overturn the concept of causation by probability and validate Einstein's intuition that God "does not play dice with the universe."17
Type C-1
3Li7, with 3 Pprotons and 4 Nprotons, is the lowest atomic number Type C-1 isotope.
As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. In total, Type C-1 isotopes have four peaks corresponding to a spin of 3/2.
Type RC-1
4Be9, with 4 Pprotons and 5 Nprotons, is the lowest atomic number RC-1 isotope.
As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. In total, Type RC-1 isotopes have four peaks corresponding to a spin of 3/2.
Type C-2
13Al27, with 13 Pprotons and 14 Nprotons, is the lowest atomic number Type C-2 isotope.
As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks.
In the identification of Type C-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. In total, Type C-2 isotopes have six peaks corresponding to a spin of 5/2.
Type RC-2
8O17, with 8 Pprotons and 9 Nprotons, is the lowest atomic number Type RC-2 isotope. 8O17 has one odd Nproton and no odd Pprotons.
As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks.
In the identification of Type RC-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. In total, Type RC-2 isotopes have six peaks corresponding to a spin of 5/2.
Type B-3
5B10, with 5 Pprotons and 5 Nprotons, is the lowest atomic number Type B-3 isotope.
As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks.
In the identification of Type B-3, the odd Pproton flux loops, determined in an electric field pointing right, have two peaks. In total, Type B-3 isotopes have seven peaks corresponding to a spin of 3.
Type A-3
21Sc45, with 21 Pprotons and 24 Nprotons, is the lowest atomic number Type A-3 isotope.
As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In total, Type A-3 isotopes have eight peaks corresponding to a spin of 7/2.
Type RA-3
20Ca43, with 20 Pprotons and 23 Nprotons, is the lowest atomic number Type RA-3 isotope.
As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type RC-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In total, Type RA-3 isotopes have eight peaks corresponding to a spin of 7/2.
Type C-4
41Nb93, with 41 Pprotons and 52 Nprotons, is the lowest atomic number Type C-4 isotope.
As in Type A-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type C-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In the identification of Type C-4, the odd Nproton flux loops, determined in an electric field pointing left, have two peaks. In total, Type C-4 isotopes have ten peaks corresponding to a spin of 9/2.
Type RC-4
32Ge73, with 32 Pprotons and 41 Nprotons, is the lowest atomic number Type RC-4 isotope.
As in Type RA-0, in a constant magnetic field absent electric fields the magnetic signature due to an odd particle has two peaks. As in Type B-1, the magnetic signature due to flux loops in a directed electric field pointing up has two peaks. As in Type RC-2, the flux loops of an odd particle, determined in an electric field pointing down, have two peaks. As in Type B-3, the magnetic signature due to flux loops in a directed electric field pointing right has two peaks. In the identification of Type RC-4, the odd Nproton flux loops, determined in an electric field pointing left, have two peaks. In total, Type RC-4 isotopes have ten peaks corresponding to a spin of 9/2.
Isotope    Z    N    Z+N   Spin   Peaks   Type
7N15       7    8    15    0.5    2       A-0
9F19       9    10   19    0.5    2       A-0
15P31      15   16   31    0.5    2       A-0
39Y89      39   50   89    0.5    2       A-0
45Rh103    45   58   103   0.5    2       A-0
47Ag109    47   62   109   0.5    2       A-0
47Ag107    47   60   107   0.5    2       A-0
69Tm169    69   100  169   0.5    2       A-0
81Tl203    81   122  203   0.5    2       A-0
81Tl205    81   124  205   0.5    2       A-0
6C13       6    7    13    0.5    2       RA-0
14Si29     14   15   29    0.5    2       RA-0
26Fe57     26   31   57    0.5    2       RA-0
34Se77     34   43   77    0.5    2       RA-0
48Cd111    48   63   111   0.5    2       RA-0
50Sn117    50   67   117   0.5    2       RA-0
50Sn115    50   65   115   0.5    2       RA-0
52Te125    52   73   125   0.5    2       RA-0
54Xe129    54   75   129   0.5    2       RA-0
74W183     74   109  183   0.5    2       RA-0
76Os187    76   111  187   0.5    2       RA-0
78Pt195    78   117  195   0.5    2       RA-0
80Hg199    80   119  199   0.5    2       RA-0
82Pb207    82   125  207   0.5    2       RA-0
3Li6       3    3    6     1.0    3       B-1
7N14       7    7    14    1.0    3       B-1
3Li7       3    4    7     1.5    4       C-1
5B11       5    6    11    1.5    4       C-1
11Na23     11   12   23    1.5    4       C-1
17Cl35     17   18   35    1.5    4       C-1
17Cl37     17   20   37    1.5    4       C-1
19K39      19   20   39    1.5    4       C-1
19K41      19   22   41    1.5    4       C-1
29Cu63     29   34   63    1.5    4       C-1
29Cu65     29   36   65    1.5    4       C-1
31Ga69     31   38   69    1.5    4       C-1
31Ga71     31   40   71    1.5    4       C-1
33As75     33   42   75    1.5    4       C-1
35Br79     35   44   79    1.5    4       C-1
35Br81     35   46   81    1.5    4       C-1
65Tb159    65   94   159   1.5    4       C-1
77Ir193    77   116  193   1.5    4       C-1
77Ir191    77   114  191   1.5    4       C-1
79Au197    79   118  197   1.5    4       C-1
4Be9       4    5    9     1.5    4       RC-1
10Ne21     10   11   21    1.5    4       RC-1
16S33      16   17   33    1.5    4       RC-1
24Cr53     24   29   53    1.5    4       RC-1
28Ni61     28   33   61    1.5    4       RC-1
54Xe131    54   77   131   1.5    4       RC-1
56Ba135    56   79   135   1.5    4       RC-1
56Ba137    56   81   137   1.5    4       RC-1
64Gd155    64   91   155   1.5    4       RC-1
64Gd157    64   93   157   1.5    4       RC-1
76Os189    76   113  189   1.5    4       RC-1
80Hg201    80   121  201   1.5    4       RC-1
13Al27     13   14   27    2.5    6       C-2
25Mn51     25   26   51    2.5    6       C-2
25Mn55     25   30   55    2.5    6       C-2
37Rb85     37   48   85    2.5    6       C-2
51Sb121    51   70   121   2.5    6       C-2
53I127     53   74   127   2.5    6       C-2
59Pr141    59   82   141   2.5    6       C-2
61Pm145    61   84   145   2.5    6       C-2
63Eu151    63   88   151   2.5    6       C-2
63Eu153    63   90   153   2.5    6       C-2
75Re185    75   110  185   2.5    6       C-2
8O17       8    9    17    2.5    6       RC-2
12Mg25     12   13   25    2.5    6       RC-2
22Ti47     22   25   47    2.5    6       RC-2
30Zn67     30   37   67    2.5    6       RC-2
40Zr91     40   51   91    2.5    6       RC-2
42Mo95     42   53   95    2.5    6       RC-2
42Mo97     42   55   97    2.5    6       RC-2
44Ru101    44   57   101   2.5    6       RC-2
44Ru99     44   55   99    2.5    6       RC-2
46Pd105    46   59   105   2.5    6       RC-2
66Dy161    66   95   161   2.5    6       RC-2
66Dy163    66   97   163   2.5    6       RC-2
70Yb173    70   103  173   2.5    6       RC-2
5B10       5    5    10    3.0    7       B-3
11Na22     11   11   22    3.0    7       B-3
21Sc45     21   24   45    3.5    8       A-3
23V51      23   28   51    3.5    8       A-3
27Co59     27   32   59    3.5    8       A-3
51Sb123    51   72   123   3.5    8       A-3
55Cs133    55   78   133   3.5    8       A-3
57La139    57   82   139   3.5    8       A-3
67Ho165    67   98   165   3.5    8       A-3
71Lu175    71   104  175   3.5    8       A-3
73Ta181    73   108  181   3.5    8       A-3
20Ca43     20   23   43    3.5    8       RA-3
22Ti49     22   27   49    3.5    8       RA-3
60Nd143    60   83   143   3.5    8       RA-3
60Nd145    60   85   145   3.5    8       RA-3
62Sm149    62   87   149   3.5    8       RA-3
68Er167    68   99   167   3.5    8       RA-3
72Hf177    72   105  177   3.5    8       RA-3
92U235     92   143  235   3.5    8       RA-3
41Nb93     41   52   93    4.5    10      C-4
49In113    49   64   113   4.5    10      C-4
83Bi209    83   126  209   4.5    10      C-4
32Ge73     32   41   73    4.5    10      RC-4
36Kr83     36   47   83    4.5    10      RC-4
38Sr87     38   49   87    4.5    10      RC-4
72Hf179    72   107  179   4.5    10      RC-4
Isotope | Z | N | Z+N | Spin | Peaks | Type
3Li6 | 3 | 3 | 6 | 1.0 | 3 | B-1
3Li7 | 3 | 4 | 7 | 1.5 | 4 | C-1
4Be9 | 4 | 5 | 9 | 1.5 | 4 | RC-1
5B10 | 5 | 5 | 10 | 3.0 | 7 | B-3
5B11 | 5 | 6 | 11 | 1.5 | 4 | C-1
6C13 | 6 | 7 | 13 | 0.5 | 2 | RA-0
7N14 | 7 | 7 | 14 | 1.0 | 3 | B-1
7N15 | 7 | 8 | 15 | 0.5 | 2 | A-0
8O17 | 8 | 9 | 17 | 2.5 | 6 | RC-2
9F19 | 9 | 10 | 19 | 0.5 | 2 | A-0
10Ne21 | 10 | 11 | 21 | 1.5 | 4 | RC-1
11Na23 | 11 | 12 | 23 | 1.5 | 4 | C-1
11Na22 | 11 | 11 | 22 | 3.0 | 7 | B-3
12Mg25 | 12 | 13 | 25 | 2.5 | 6 | RC-2
13Al27 | 13 | 14 | 27 | 2.5 | 6 | C-2
14Si29 | 14 | 15 | 29 | 0.5 | 2 | RA-0
15P31 | 15 | 16 | 31 | 0.5 | 2 | A-0
16S33 | 16 | 17 | 33 | 1.5 | 4 | RC-1
17Cl35 | 17 | 18 | 35 | 1.5 | 4 | C-1
17Cl37 | 17 | 20 | 37 | 1.5 | 4 | C-1
19K39 | 19 | 20 | 39 | 1.5 | 4 | C-1
19K41 | 19 | 22 | 41 | 1.5 | 4 | C-1
20Ca43 | 20 | 23 | 43 | 3.5 | 8 | RA-3
21Sc45 | 21 | 24 | 45 | 3.5 | 8 | A-3
22Ti47 | 22 | 25 | 47 | 2.5 | 6 | RC-2
22Ti49 | 22 | 27 | 49 | 3.5 | 8 | RA-3
23V51 | 23 | 28 | 51 | 3.5 | 8 | A-3
24Cr53 | 24 | 29 | 53 | 1.5 | 4 | RC-1
25Mn51 | 25 | 26 | 51 | 2.5 | 6 | C-2
25Mn55 | 25 | 30 | 55 | 2.5 | 6 | C-2
26Fe57 | 26 | 31 | 57 | 0.5 | 2 | RA-0
27Co59 | 27 | 32 | 59 | 3.5 | 8 | A-3
28Ni61 | 28 | 33 | 61 | 1.5 | 4 | RC-1
29Cu63 | 29 | 34 | 63 | 1.5 | 4 | C-1
29Cu65 | 29 | 36 | 65 | 1.5 | 4 | C-1
30Zn67 | 30 | 37 | 67 | 2.5 | 6 | RC-2
31Ga69 | 31 | 38 | 69 | 1.5 | 4 | C-1
31Ga71 | 31 | 40 | 71 | 1.5 | 4 | C-1
32Ge73 | 32 | 41 | 73 | 4.5 | 10 | RC-4
33As75 | 33 | 42 | 75 | 1.5 | 4 | C-1
34Se77 | 34 | 43 | 77 | 0.5 | 2 | RA-0
35Br79 | 35 | 44 | 79 | 1.5 | 4 | C-1
35Br81 | 35 | 46 | 81 | 1.5 | 4 | C-1
36Kr83 | 36 | 47 | 83 | 4.5 | 10 | RC-4
37Rb85 | 37 | 48 | 85 | 2.5 | 6 | C-2
38Sr87 | 38 | 49 | 87 | 4.5 | 10 | RC-4
39Y89 | 39 | 50 | 89 | 0.5 | 2 | A-0
40Zr91 | 40 | 51 | 91 | 2.5 | 6 | RC-2
41Nb93 | 41 | 52 | 93 | 4.5 | 10 | C-4
42Mo95 | 42 | 53 | 95 | 2.5 | 6 | RC-2
42Mo97 | 42 | 55 | 97 | 2.5 | 6 | RC-2
44Ru101 | 44 | 57 | 101 | 2.5 | 6 | RC-2
44Ru99 | 44 | 55 | 99 | 2.5 | 6 | RC-2
45Rh103 | 45 | 58 | 103 | 0.5 | 2 | A-0
46Pd105 | 46 | 59 | 105 | 2.5 | 6 | RC-2
47Ag107 | 47 | 60 | 107 | 0.5 | 2 | A-0
47Ag109 | 47 | 62 | 109 | 0.5 | 2 | A-0
48Cd111 | 48 | 63 | 111 | 0.5 | 2 | RA-0
49In113 | 49 | 64 | 113 | 4.5 | 10 | C-4
50Sn115 | 50 | 65 | 115 | 0.5 | 2 | RA-0
50Sn117 | 50 | 67 | 117 | 0.5 | 2 | RA-0
51Sb121 | 51 | 70 | 121 | 2.5 | 6 | C-2
51Sb123 | 51 | 72 | 123 | 3.5 | 8 | A-3
52Te125 | 52 | 73 | 125 | 0.5 | 2 | RA-0
53I127 | 53 | 74 | 127 | 2.5 | 6 | C-2
54Xe129 | 54 | 75 | 129 | 0.5 | 2 | RA-0
54Xe131 | 54 | 77 | 131 | 1.5 | 4 | RC-1
55Cs133 | 55 | 78 | 133 | 3.5 | 8 | A-3
56Ba135 | 56 | 79 | 135 | 1.5 | 4 | RC-1
56Ba137 | 56 | 81 | 137 | 1.5 | 4 | RC-1
57La139 | 57 | 82 | 139 | 3.5 | 8 | A-3
59Pr141 | 59 | 82 | 141 | 2.5 | 6 | C-2
60Nd143 | 60 | 83 | 143 | 3.5 | 8 | RA-3
60Nd145 | 60 | 85 | 145 | 3.5 | 8 | RA-3
61Pm145 | 61 | 84 | 145 | 2.5 | 6 | C-2
62Sm149 | 62 | 87 | 149 | 3.5 | 8 | RA-3
63Eu151 | 63 | 88 | 151 | 2.5 | 6 | C-2
63Eu153 | 63 | 90 | 153 | 2.5 | 6 | C-2
64Gd155 | 64 | 91 | 155 | 1.5 | 4 | RC-1
64Gd157 | 64 | 93 | 157 | 1.5 | 4 | RC-1
65Tb159 | 65 | 94 | 159 | 1.5 | 4 | C-1
66Dy161 | 66 | 95 | 161 | 2.5 | 6 | RC-2
66Dy163 | 66 | 97 | 163 | 2.5 | 6 | RC-2
67Ho165 | 67 | 98 | 165 | 3.5 | 8 | A-3
68Er167 | 68 | 99 | 167 | 3.5 | 8 | RA-3
69Tm169 | 69 | 100 | 169 | 0.5 | 2 | A-0
70Yb173 | 70 | 103 | 173 | 2.5 | 6 | RC-2
71Lu175 | 71 | 104 | 175 | 3.5 | 8 | A-3
72Hf177 | 72 | 105 | 177 | 3.5 | 8 | RA-3
72Hf179 | 72 | 107 | 179 | 4.5 | 10 | RC-4
73Ta181 | 73 | 108 | 181 | 3.5 | 8 | A-3
74W183 | 74 | 109 | 183 | 0.5 | 2 | RA-0
75Re185 | 75 | 110 | 185 | 2.5 | 6 | C-2
76Os187 | 76 | 111 | 187 | 0.5 | 2 | RA-0
76Os189 | 76 | 113 | 189 | 1.5 | 4 | RC-1
77Ir191 | 77 | 114 | 191 | 1.5 | 4 | C-1
77Ir193 | 77 | 116 | 193 | 1.5 | 4 | C-1
78Pt195 | 78 | 117 | 195 | 0.5 | 2 | RA-0
79Au197 | 79 | 118 | 197 | 1.5 | 4 | C-1
80Hg199 | 80 | 119 | 199 | 0.5 | 2 | RA-0
80Hg201 | 80 | 121 | 201 | 1.5 | 4 | RC-1
81Tl203 | 81 | 122 | 203 | 0.5 | 2 | A-0
81Tl205 | 81 | 124 | 205 | 0.5 | 2 | A-0
82Pb207 | 82 | 125 | 207 | 0.5 | 2 | RA-0
83Bi209 | 83 | 126 | 209 | 4.5 | 10 | C-4
92U235 | 92 | 143 | 235 | 3.5 | 8 | RA-3
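In every row of the table the Peaks column equals two times the Spin plus one, the familiar multiplicity of a spin-I system. A minimal Python check over a sample of rows copied from the table above:

```python
# Sample rows from the isotope table: (isotope, spin, peaks).
rows = [
    ("3Li6",    1.0, 3),
    ("3Li7",    1.5, 4),
    ("8O17",    2.5, 6),
    ("5B10",    3.0, 7),
    ("20Ca43",  3.5, 8),
    ("32Ge73",  4.5, 10),
    ("82Pb207", 0.5, 2),
]

# Peaks = 2 * Spin + 1 holds for every sampled row.
for isotope, spin, peaks in rows:
    assert peaks == 2 * spin + 1, isotope
```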
In GAP, the gyromagnetic ratio of a nucleus is equal to the product of the INDC isotope g-factor and the CODATA nuclear magneton divided by the product of the INDC intrinsic spin and the CODATA reduced Planck constant, and the magnetic moment of a nucleus is equal to the product of the INDC isotope g-factor and the CODATA nuclear magneton.
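The GAP relations as stated above can be sketched directly. A minimal Python illustration, using CODATA values for the nuclear magneton and reduced Planck constant; the proton g-factor and spin used at the end are sample inputs only:

```python
MU_N = 5.0507837461e-27  # CODATA nuclear magneton, J/T
HBAR = 1.054571817e-34   # CODATA reduced Planck constant, J*s

def gyromagnetic_ratio(g, spin):
    # gamma = g * mu_N / (I * hbar), per the GAP definition quoted above
    return g * MU_N / (spin * HBAR)

def magnetic_moment(g):
    # mu = g * mu_N, per the GAP definition quoted above
    return g * MU_N

# Sample inputs: proton g-factor and intrinsic spin 1/2.
g_p, I_p = 5.5856946893, 0.5
gamma_p = gyromagnetic_ratio(g_p, I_p)
mu_p = magnetic_moment(g_p)
```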
In discrete physics, the magnetic moment of a nucleus is equal to the product of two times the interactional spin (converts spin to number of odd Pprotons and/or odd Nprotons), the kinetic steric factor (converts molecular beam thermal energy into Joules), Lambda-bar, and the GAP value for the gyromagnetic ratio (assumed correct).
In the 106 isotopes tested, the ratio of the INDC isotope magnetic moment divided by the value denominated in discrete units is equal to 1.0288816.
The difference can be narrowed by adjustment but cannot be eliminated because CODATA constants are not exactly reconciled.
Part Four
Particle acceleration
Einstein believed mass was constant and many of his revolutionary discoveries were based on that concept. Constancy of mass is an eminently reasonable assumption because Newtonian equations are also founded on mass conservation and in the majority of situations those equations accurately predict the observables. But in fact, as Newton himself succinctly expressed in his letter to Richard Bentley, the Newtonian equations do not correspond to physical reality.18
Einstein also believed the speed of light was constant and since kinetic energy is proportional to mass and velocity, he concluded that the mass of a particle increases with velocity and approaches (but never reaches) a maximum value as the velocity approaches the speed of light. In special relativity he was able to derive, in a few simple equations, the relativistic momentum and energy (mass-energy) of a particle.
In general relativity, Einstein’s field equations described the curvature of space-time in intense gravitational fields in agreement with the measured value for the precession of the perihelion of Mercury. It seems likely the field equations were derived with that result in mind. Even so, this approach is eminently justifiable because measurables are valid assumptions for a physical theory.
Einstein’s prediction that the curvature of space-time in intense gravitational fields was not only responsible for the precession of the perihelion of Mercury but would also bend rays of light was verified in two astronomical expeditions led by Arthur Eddington and Andrew Crommelin. Their observations were acclaimed as verification of general relativity and today the curvature of space-time is considered by most scientists to be undisputed.
Unfortunately, this undisputed theory cannot determine the velocity of a relativistically accelerated electron or proton and does not provide a mechanism for the increase in energy and mass (mass-energy).
The present theory derives the velocity and mass-energy of accelerated electrons and protons, and provides a mechanism.
In particle acceleration, charged particles are electrostatically formed into a linear beam and accelerated, then injected into a circular accelerator (or cyclotron) where they are magnetically formed into a circular beam and further accelerated by oscillating magnetic fields. Particle acceleration in linear and circular beams is mediated by chirality meshing interactions.
An electrostatic voltage is the emission of quantons:
In electrostatic acceleration of negatively charged particles between a negative cathode on the left emitting CCW quantons and a positive anode on the right emitting CW quantons, chirality meshing absorptions of CCW quantons result in repulsive deflections (voltage acceleration) to the right and chirality meshing absorptions of CW quantons result in attractive deflections (voltage acceleration) to the right.
If positively charged particles are between a negative cathode on the left emitting CCW quantons and a positive anode on the right emitting CW quantons, chirality meshing absorptions of CCW quantons result in attractive deflections (voltage acceleration) to the left and chirality meshing absorptions of CW quantons result in repulsive deflections (voltage acceleration) to the left.
Quantons are also produced transverse to a magnetic field with CCW quantons emitted by the magnetic North pole and CW quantons emitted by the magnetic South pole:
In acceleration by a transverse oscillating magnetic field, charged particles are alternately pushed (repulsively deflected) from one direction and pulled (attractively deflected) from the opposite direction.
Negatively charged particles are alternately pushed (deflected in the direction of the positive anode) due to the absorption of CCW quantons and pulled (deflected in the direction of the positive anode) due to the absorption of CW quantons.
Positively charged particles are alternately pulled (deflected in the direction of the negative cathode) due to the absorption of CCW quantons, and pushed (deflected in the direction of the negative cathode) due to the absorption of CW quantons.
In either case (electrostatic voltage or oscillating magnetic voltage) the energy of simultaneous acceleration by oppositely directed voltages is proportional to the square of the voltage.
A chirality meshing absorption of a quanton increases the intrinsic energy of a particle and produces an intrinsic deflection that increases the particle velocity. Like kinetic acceleration, an intrinsic deflection increases the velocity but does so without the dissipation of kinetic energy.
The number of particles and quantons is directly proportional to the intrinsic Josephson constant: 3.0000E15 quantons are absorbed by 3.0000E15 particles per second per Volt. At 400 Volts 1.2000E18 quantons are absorbed by 1.2000E18 particles per second; and at 250,000 Volts 7.5000E20 quantons are absorbed by 7.5000E20 particles per second.
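The counting rule above reduces to a one-line function. A sketch using only the numbers quoted in the text:

```python
# 3.0000E15 quantons are absorbed by 3.0000E15 particles per second per Volt.
INTRINSIC_JOSEPHSON = 3.0000e15  # quantons per second per Volt

def quantons_per_second(volts):
    return INTRINSIC_JOSEPHSON * volts

# The two worked examples from the text.
assert quantons_per_second(400) == 1.2000e18
assert quantons_per_second(250_000) == 7.5000e20
```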
Each quanton absorption produces a deflection (acceleration) equal to the square root of the ratio of Lambda-bar divided by the particle amplitude. Quanton absorption by an electron produces a deflection of 2.5327E-18 meters, and quanton absorption by a proton produces a deflection of 2.0680E-19 meters.
The number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar. The intrinsic energy absorbed by a particle in a chirality meshing interaction is equal to the product of the number of chirality meshing interactions and Lambda-bar, divided by the number of particles. The accelerated particle intrinsic energy is equal to the sum of the particle intrinsic energy plus the intrinsic energy absorbed by the particle in a chirality meshing interaction.
The kinetic mass-energy in units of Joule is equal to the product of the accelerated particle intrinsic energy, the square of the photon velocity, and the ratio of the discrete Planck constant divided by Lambda-bar.
Electron acceleration
Below left, the GAP equation for electron velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the CODATA elementary charge (units of Coulomb) and the voltage, divided by the CODATA electron mass (units of kilogram).
Above right, the discrete equation for electron velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the charge intrinsic energy and the voltage, divided by the electron intrinsic energy.
The velocity calculated by the GAP equation is higher than the discrete equation by a factor of 1.007697. The difference can be narrowed by adjustment but cannot be eliminated because CODATA constants are not reconciled.
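The GAP expression is the conventional non-relativistic formula and can be evaluated directly. A sketch using CODATA values (the discrete constants of the text are not used here):

```python
import math

E_CHARGE = 1.602176634e-19     # CODATA elementary charge, C
M_ELECTRON = 9.1093837015e-31  # CODATA electron mass, kg

def gap_electron_velocity(volts):
    # v = sqrt(2 * e * V / m_e), the GAP equation described above
    return math.sqrt(2 * E_CHARGE * volts / M_ELECTRON)

# At 100 Volts this gives the familiar value of about 5.93e6 m/s.
v100 = gap_electron_velocity(100)
```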
The analysis of electron acceleration includes a range of ten voltages between a minimum voltage and a maximum voltage. The maximum voltage is equal to a few millivolts less than the theoretical voltage required to accelerate an electron to the photon velocity (an impossibility), which, if calculated to fifteen significant digits, is 259807.621135332 Volts.
Top row column 1, the voltages used in this example analysis are 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts. The highest voltage, calculated to thirteen significant digits, exactly converts to the photon velocity (an impossibility) at eleven significant digits but is less than the photon velocity (the correct result) at twelve significant digits (this is an excellent example of a discretely exact property).
The equations following, calculations for 100 Volts, are identical to the equations for any other of the nine voltages, or for any other range of ten voltages greater than zero and less than the theoretical maximum.
Top row column 2, the calculated electron velocity per the discrete equation.
Top row column 3, the number of accelerated (deflected) electrons is equal to the ratio of the voltage divided by the intrinsic electron magnetic flux quantum.
Top row column 4, the deflection per quanton is equal to the square root of the ratio of Lambda-bar divided by the electron amplitude.
This is the deflection of a chirality meshing interaction between a quanton and an electron.
Bottom row column 1, the number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar.
Bottom row column 2, the increase in intrinsic energy per electron due to chirality meshing interactions, equal to the product of the number of chirality meshing interactions and Lambda-bar divided by the number of electrons, is denominated in units of Einstein.
Bottom row column 3, the accelerated electron energy is equal to the sum of the electron intrinsic energy and the increase in intrinsic energy per electron.
Bottom row column 4, the mass-energy in units of Joule is equal to the product of the accelerated electron intrinsic energy, the square of the photon velocity and the ratio of the discrete Planck constant divided by Lambda-bar.
Proton acceleration
The analysis of proton acceleration includes a range of ten voltages between a minimum voltage and a maximum voltage. For purposes of comparison, we specify the same voltages as used for the electron.
The theoretical voltage required to accelerate a proton to the photon velocity (an impossibility) is 38971143.1702997 Volts. Any voltage less than this theoretical maximum will accelerate a proton to less than the photon velocity.
Below left, the GAP equation for proton velocity due to electrostatic or electromagnetic voltage is equal to the square root of the ratio of the product of 2, the CODATA elementary charge (units of Coulomb) and the voltage, divided by the CODATA proton mass (units of kilogram).
Above right, the discrete equation for proton velocity, due to electrostatic or electromagnetic voltage, is equal to the square root of the ratio of the product of 2, the charge intrinsic energy (in units of intrinsic Volt) and the voltage, divided by the proton intrinsic energy (in units of Einstein).
The discrete proton velocity is lower than the discrete electron velocity by the square root of 150 (the square root of the proton amplitude).
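The two quoted theoretical maxima are internally consistent with this factor: the proton maximum is 150 times the electron maximum, and at any given voltage the discrete proton velocity is the electron velocity divided by the square root of 150. A minimal check, assuming only the numbers quoted in the text:

```python
import math

V_MAX_ELECTRON = 259807.621135332  # Volts, quoted electron maximum
V_MAX_PROTON = 38971143.1702997    # Volts, quoted proton maximum
PROTON_AMPLITUDE = 150             # the proton amplitude used in the text

# The quoted maxima differ by exactly the proton amplitude.
assert abs(V_MAX_PROTON / V_MAX_ELECTRON - PROTON_AMPLITUDE) < 1e-9

# The discrete velocity ratio is sqrt(150), about 12.247.
velocity_ratio = math.sqrt(PROTON_AMPLITUDE)
```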
The equations below, calculations for 100 Volts, are identical to the equations for any other of the nine voltages, or for any other range of ten voltages greater than zero and less than the theoretical maximum.
Top row column 1, the voltages used in this example analysis are 1, 100, 400, 800, 4000, 10000, 25000, 100000, 250000, and 259807.621135 Volts.
Top row column 2, the calculated proton velocity per the discrete equation.
Top row column 3, the number of accelerated (deflected) protons is equal to the ratio of the voltage divided by the intrinsic electron magnetic flux quantum.
Top row column 4, the deflection per quanton is equal to the square root of the ratio of Lambda-bar divided by the proton amplitude.
This is the deflection of a chirality meshing interaction between a quanton and a proton.
Bottom row column 1, the number of chirality meshing interactions is equal to the square of the voltage divided by the square root of Lambda-bar.
Bottom row column 2, the increase in intrinsic energy per proton due to chirality meshing interactions, equal to the product of the number of chirality meshing interactions and Lambda-bar divided by the number of protons, is denominated in units of Einstein.
Bottom row column 3, the accelerated proton energy is equal to the sum of the intrinsic proton energy and the increase in intrinsic energy per proton.
Bottom row column 4, the mass-energy in units of Joule is equal to the product of the accelerated proton intrinsic energy, the square of the photon velocity and the ratio of the discrete Planck constant divided by Lambda-bar.
Part Five
Atomic Spectra
The Rydberg equations correspond to high accuracy with the hydrogen spectral series and the Newtonian equations correspond to high accuracy with orbital motion but, despite many years of considerable effort, physicists have been unable to account for the spectrum of helium or for non-Newtonian stellar rotation curves.
Previously, we reformulated the Newtonian equations and explained stellar rotation curves. In this chapter we will reformulate the Rydberg equations for the spectral series of hydrogen and derive a general explanation for atomic spectra.
The equation formulated by Johann Balmer in 1885, in which the hydrogen spectrum wave numbers are proportional to the product of a constant and the difference between the inverse square of two integers, is correct, but the Bohr Model is not.
The electron is not a point particle, the electron does not orbit the proton, the force conveyed by an electron is not transmitted an infinite distance, at an infinitesimal distance the force is not infinite, electrons with lower energy and lower wave number are closer to the proton, and electrons with higher energy and higher wave number are further away from the proton (the Bohr distance-energy relationship must be reversed).
In hydrogen an electron and proton are engaged in a positional resonance. In atoms larger than hydrogen many electrons and protons are engaged in positional resonances. Each resonance is between one electron external to the nucleus and one proton internal to the nucleus, in which the electron and the nuclear proton are facing in opposite directions and each particle emits quantons that are absorbed by the other particle. On emission by the electron the quanton is CCW and on emission by the nuclear proton the quanton is CW. On emission the emitting particle recoils by a distance proportional to the particle intrinsic energy and on absorption the absorbing particle is attractively deflected (a chirality meshing interaction) by a distance proportional to the particle intrinsic energy. The result is a sustained positional resonance of a CCW quanton emitted in one direction by the electron and absorbed by the nuclear proton and a CW quanton emitted in the opposite direction by the nuclear proton and absorbed by the electron.
In the hydrogen atom, the resonance can be situated at any one of several quantized positions proportional to energy and corresponding to spectral emission and absorption lines. On emission of a photon the energy of the resonance decreases, and the electron drops to the adjacent lower energy level. On absorption of a photon the energy of the resonance increases, and the electron jumps to the adjacent higher energy level. The highest stable energy level, corresponding to an emission-only line, the maximum electron-proton separation distance beyond which the positional resonance no longer exists, is the hydrogen ionization energy.
The above paragraphs summarize the spectral mechanism which, for the time being, shall be considered a hypothesis.
The intrinsic to kinetic energy factor is equal to each of the following: the ratio of the discrete Planck constant divided by the Coulomb, divided by the ratio of Lambda-bar divided by the charge intrinsic energy; the ratio of the discrete Planck constant divided by the product of Lambda-bar and the square root of the proton amplitude divided by two; and two times the intrinsic steric factor.
The ionization energy of hydrogen (in larger atoms the ionization energy required to remove the last electron) is a discretely exact single value above which the atom no longer exists. The measured energy of hydrogen ionization is 1312 kJ/mol, and the corresponding CRC value is 13.59844 (units of kinetic electron Volts).19 Kinetic electron Volts divided by Omega-2 equals intrinsic Volts (units of Joule), which divided by 12 (the intrinsic to kinetic energy factor) equals intrinsic Volts (units of Einstein), which multiplied by the intrinsic electron charge equals intrinsic energy, which divided by Lambda-bar is equal to the photon frequency of hydrogen ionization.
Working backwards from the calculation sequences above, the discretely exact value of the photon ionization frequency is 3.28000000E15.
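As a cross-check using standard CODATA constants rather than the discrete constants of the text, converting the CRC value of 13.59844 eV with E = hf gives a frequency within 0.3% of the discretely exact value:

```python
E_CHARGE = 1.602176634e-19  # CODATA elementary charge, C
H_PLANCK = 6.62607015e-34   # CODATA Planck constant, J*s

# 13.59844 eV converted to Joules, then to a frequency via E = h * f.
f_ionization = 13.59844 * E_CHARGE / H_PLANCK  # about 3.288e15 Hz

# Within 0.3% of the discretely exact value 3.28000000E15.
assert abs(f_ionization - 3.28e15) / 3.28e15 < 0.003
```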
The intrinsic energy of hydrogen ionization, denominated in units of Einstein, is equal to the product of the photon frequency and Lambda-bar.
The intrinsic energy of hydrogen ionization, denominated in units of Joule, is equal to the product of the photon frequency and the discrete Planck constant.
The intrinsic voltage of hydrogen ionization, denominated in units of Einstein, is equal to the product of the photon frequency and Lambda-bar, divided by the charge intrinsic energy.
The ratio of the intrinsic voltage of hydrogen ionization divided by Psi is equal to the discrete Rydberg constant and denominated in units of inverse meter (spatial frequency).
The intrinsic voltage of hydrogen ionization, denominated in units of Joule, is equal to the product of 12 (the intrinsic to kinetic energy factor) and the discrete Rydberg constant, and the product of the photon frequency and the discrete Planck constant, divided by the Coulomb.
The kinetic voltage of hydrogen ionization, denominated in units of electron Volt, is equal to the product of the intrinsic voltage of hydrogen ionization and omega-2.
The difference between the above calculated energy of ionization and the CRC value is less than 0.30%. The poor accuracy is due to the performance standards of calorimeters.20 In the measurement of a sample against a calibration standard, a statistical analysis of the results will show the data lie within three standard deviations (sigma-3) of the mean (the expected value) and the accuracy will be 0.15% (99.85% of the measurements will lie in the range of higher than the calibration standard by no more than 0.15% or lower than the calibration standard by no more than 0.15%). If the identical procedure is used without prior knowledge of the expected result and whether the measurement is higher or lower than the actual value is unknown, the accuracy falls to no more than 0.30%.
The difference between the calculated kinetic voltage of hydrogen ionization and the measured CRC value, expressed as a percentage of the CRC value, is 0.2666%.
Spectral series consist of a number of emission-absorption lines with a lower limit on the left and an upper limit on the right. Both limits are asymptotes: the lower limit corresponds to minimum energy, minimum frequency, and maximum wavelength; and the upper limit corresponds to maximum energy, maximum frequency, and minimum wavelength.
The below diagram of the Lyman spectral series consists of seven black emission-absorption lines to the left and a red emission-only line on the right. From left to right these lines are the Lyman lower limit (Lyman-A), Lyman-B, Lyman-C, Lyman-D, Lyman-E, Lyman-F, Lyman-G, and the Lyman upper limit.
The Rydberg equation expresses the wave numbers of the hydrogen spectrum as equal to the product of the discrete Rydberg constant and the difference between the inverse square of the m-index and the inverse square of the n-index.
The m-index has a constant value for each spectral series within the hydrogen spectrum. The six series ordered by highest energy (at the series upper limit) are Lyman, Balmer, Paschen, Brackett, Pfund and Humphreys.
Each line of a spectral series can be expressed in terms of energy, wave number, wavelength and photon frequency. The energy, wave number, and frequency increase from left to right, but the wavelength decreases from left to right.
For each spectral series the m-index increases from lowest to highest positional energy (Lyman = 1, Balmer = 2, Paschen = 3, Brackett = 4, Pfund = 5, Humphreys = 6). Each spectral series is composed of a sequence of lines (A, B, C, D, E, F, G) in which the n-index is equal to m+1, m+2, m+3, m+4, etc.
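The Rydberg relation in frequency form can be sketched with the discretely exact ionization (Lyman limit) frequency of 3.28E15 as the prefactor; the Lyman-B value it returns matches the 2.915555E15 figure quoted in the transition examples later in this part:

```python
F_IONIZATION = 3.28e15  # Hz, the discretely exact hydrogen ionization frequency

def structural_frequency(m, n):
    # f(m, n) = F * (1/m^2 - 1/n^2), the Rydberg relation in frequency form
    return F_IONIZATION * (1 / m**2 - 1 / n**2)

def series_limit(m):
    # the n -> infinity limit of the series with index m
    return F_IONIZATION / m**2

# Lyman-B corresponds to m = 1, n = 3 (Lyman-A is n = 2).
lyman_b = structural_frequency(1, 3)  # about 2.915555e15 Hz
```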
In the following analysis we will apply the Rydberg formula to calculate, based on the discretely exact value of the photon ionization frequency of 3.280000E15, the values for energy, wave number and frequency of the six spectral series of hydrogen.
The below calculations begin with the discretely exact values for the Lyman limit photon frequency and the hydrogen ionization energy (intrinsic voltage units of Joule), and the value of the discrete Rydberg constant.
The Lyman upper limit is an emission-only line because at any energy above the Lyman upper limit the hydrogen atom no longer exists. The calculation for the line prior to the Lyman upper limit is based on an n-index equal to 8, but there are additional discernable lines after Lyman-G because the Lyman upper limit is an asymptote. The identical situation holds for the limit of any spectral series.
The spectral series lower limit, the A-line (Lyman-A, Balmer-A, etc.) is also an asymptote and there are additional discernable lines between the C-line and the A-line. The number of lines included in a spectral series analysis is optional, but it is convenient to use the same number of lines in spectral series to be compared.
In this presentation, 8 Lyman and Balmer lines are included because these lines are specified in at least one of the easily available online sources. In the Paschen, Brackett, Pfund and Humphreys spectral series, 6 lines are included because these are also easily available.21
The ratio of the Lyman upper limit divided by the upper limit of another hydrogen spectral series is equal to the square of the m-index of the other series:
The Lyman upper limit divided by the Balmer upper limit is equal to 4.
The Lyman upper limit divided by the Paschen upper limit is equal to 9.
The Lyman upper limit divided by the Brackett upper limit is equal to 16.
The Lyman upper limit divided by the Pfund upper limit is equal to 25.
The Lyman upper limit divided by the Humphreys upper limit is equal to 36.
The ratio of the Lyman spectral series upper limit divided by the Lyman spectral series lower limit is equal to the ratio of the Rydberg wave number calculation for the upper limit divided by the Rydberg wave number calculation for the lower limit.
In all spectral series the Rydberg ratio is equal to the upper limit energy divided by the lower limit energy, the ratio of the upper limit structural frequency divided by the lower limit structural frequency, and the ratio of the lower limit wavelength divided by the upper limit wavelength.
The ratio of the Balmer spectral series upper limit divided by the Balmer spectral series lower limit is equal to the ratio of the Rydberg wave number calculation for the upper limit divided by the Rydberg wave number calculation for the lower limit.
The same calculation is used for the other four hydrogen spectral series:
The ratio of the Paschen spectral series upper limit divided by the Paschen lower limit is equal to 1312/574 = 16/7 (2.285714).
The ratio of the Brackett spectral series upper limit divided by the Brackett lower limit is equal to 25/9 (2.777777).
The ratio of the Pfund spectral series upper limit divided by the Pfund lower limit is equal to 36/11 (3.272727).
The ratio of the Humphreys spectral series upper limit divided by the Humphreys lower limit is equal to 49/13 (3.769230).
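The limit ratios and the upper/lower ratios above are exact fractions of the Rydberg indices: the series limit is 1/m², and the lower limit (A-line) is 1/m² minus 1/(m+1)². A minimal check with Python's Fraction:

```python
from fractions import Fraction

def limit(m):
    # series upper limit wave-number factor: 1/m^2
    return Fraction(1, m * m)

def lower(m):
    # series lower limit (A-line) factor: 1/m^2 - 1/(m+1)^2
    return Fraction(1, m * m) - Fraction(1, (m + 1) * (m + 1))

# Lyman limit divided by the other series limits: 4, 9, 16, 25, 36.
assert [limit(1) / limit(m) for m in range(2, 7)] == [4, 9, 16, 25, 36]

# Upper/lower ratios: Paschen 16/7, Brackett 25/9, Pfund 36/11, Humphreys 49/13.
assert limit(3) / lower(3) == Fraction(16, 7)
assert limit(4) / lower(4) == Fraction(25, 9)
assert limit(5) / lower(5) == Fraction(36, 11)
assert limit(6) / lower(6) == Fraction(49, 13)
```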
Above, the frequencies under the A, B, C, D, E, F, G-lines and the series limit are the positional structural frequencies, and the transition frequencies between lines (B-A, C-B … F-E, G-F) are the photon emission-absorption frequencies.
The structural frequency of the G-line is equal to the product of the Rydberg calculated wave number and the photon velocity. The energy of the G-line (intrinsic Volts units of Joule) is equal to the product of the structural frequency of the G-line and the Coulomb divided by the discrete Planck constant.
The structural frequency of the F-line is equal to the product of the Rydberg calculated wave number and the photon velocity. The energy of the F-line (intrinsic Volts units of Joule) is equal to the product of the structural frequency of the F-line and the Coulomb divided by the discrete Planck constant.
The photon emission-absorption frequency of the G-F transition is equal to the structural frequency of the G-line minus the structural frequency of the F-line. The energy of the G-F transition (intrinsic Volts units of Joule) is equal to the energy of the G-line minus the energy of the F-line.
The identical process is used to calculate the emission-absorption frequencies and energies for all spectral series.
Note there is no transition frequency or energy between the G-line and the series limit because the series limit is emission-only.
Lyman series transition photons identical to Balmer series photons:
When a Lyman-C positional resonance drops down to Lyman-B, the Lyman-C energy is emitted as two photons: a 11.662222 Vi(J) Lyman-B photon frequency 2.915555E15 and a 0.637777 Vi(J) Lyman C-B photon frequency 1.594444E14. The frequency and wavelength of the transition photon is identical to the Balmer B-A transition photon.
When a Lyman-D positional resonance drops down to Lyman-C, the Lyman-D energy is emitted as two photons: a 12.300000 Vi(J) Lyman-C photon frequency 3.075000E15 and a 0.295200 Vi(J) Lyman D-C photon frequency 7.380000E13. The frequency and wavelength of the transition photon is identical to the Balmer C-B transition photon.
When a Lyman-E positional resonance drops down to Lyman-D, the Lyman-E energy is emitted as two photons: a 12.595200 Vi(J) Lyman-D photon frequency 3.148800E15 and a 0.160356 Vi(J) Lyman E-D photon frequency 4.008888E13. The frequency and wavelength of the transition photon is identical to the Balmer D-C transition photon.
When a Lyman-F positional resonance drops down to Lyman-E, the Lyman-F energy is emitted as two photons: a 12.755555 Vi(J) Lyman-E photon frequency 3.188888E15 and a 0.096689 Vi(J) Lyman F-E photon frequency 2.41723E13. The frequency and wavelength of the transition photon is identical to the Balmer E-D transition photon.
When a Lyman-G positional resonance drops down to Lyman-F, the Lyman-G energy is emitted as two photons: a 12.852245 Vi(J) Lyman-F photon frequency 3.21306E15 and a 0.062755 Vi(J) Lyman G-F photon frequency 1.568878E13. The frequency and wavelength of the transition photon is identical to the Balmer F-E transition photon.
The equivalence of Balmer-A and Lyman series transitions can be extended to the Paschen, Brackett, Pfund and Humphreys series.
The Lyman C-B transition is equal to the energy and frequency of Paschen-A.
The Lyman D-C transition is equal to the energy and frequency of Brackett-A.
The Lyman E-D transition is equal to the energy and frequency of Pfund-A.
The Lyman F-E transition is equal to the energy and frequency of Humphreys-A.
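These identities are exact consequences of the Rydberg indices, because 1/n² minus 1/(n+1)² is precisely the A-line factor of the series with m = n. A minimal check with exact fractions:

```python
from fractions import Fraction

def line(m, n):
    # Rydberg wave-number factor of the line with indices m, n
    return Fraction(1, m * m) - Fraction(1, n * n)

# Each Lyman transition difference equals another series' A-line
# (Lyman-A is n = 2, Lyman-B is n = 3, and so on).
assert line(1, 3) - line(1, 2) == line(2, 3)  # Lyman B-A = Balmer-A
assert line(1, 4) - line(1, 3) == line(3, 4)  # Lyman C-B = Paschen-A
assert line(1, 5) - line(1, 4) == line(4, 5)  # Lyman D-C = Brackett-A
assert line(1, 6) - line(1, 5) == line(5, 6)  # Lyman E-D = Pfund-A
assert line(1, 7) - line(1, 6) == line(6, 7)  # Lyman F-E = Humphreys-A
```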
An explanation of atomic spectra begins with the ionization energies.
In atoms with more than one proton, the discretely exact energy (in red) for the elemental ionization energy, above which the atom no longer exists, is equal to the product of the square of the number of protons and the discretely exact value for the hydrogen ionization energy. The intermediate ionization energies (in blue) are equal to the CRC value divided by omega-2.
The ionization frequency is equal to the product of the ionization energy and the Coulomb divided by the discrete Planck constant.
The ionization wave number is equal to the ionization frequency divided by the photon velocity.
The photon wavelength is the inverse of the wave number.
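The four steps above can be transcribed directly. A minimal sketch, using standard SI values as stand-ins for the Coulomb, the discrete Planck constant, and the photon velocity (the book's discretely exact values differ slightly and are defined elsewhere):

```python
# Ionization energy -> frequency -> wave number -> wavelength, per the text.
# Standard CODATA values are used as stand-ins for the book's constants.

COULOMB = 1.602176634e-19   # volt-to-joule conversion charge (stand-in)
H_PLANCK = 6.62607015e-34   # stand-in for the discrete Planck constant
V_PHOTON = 2.99792458e8     # stand-in for the photon velocity, m/s

def ionization_frequency(e_volts):
    """Frequency = energy times the Coulomb, divided by the Planck constant."""
    return e_volts * COULOMB / H_PLANCK

def wave_number(freq):
    """Wave number = frequency divided by the photon velocity."""
    return freq / V_PHOTON

def wavelength(freq):
    """Photon wavelength = inverse of the wave number."""
    return 1.0 / wave_number(freq)

f = ionization_frequency(13.5984)   # hydrogen CRC ionization energy, eV
lam = wavelength(f)                 # about 9.1E-8 m, the Lyman limit
```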
The difference between the calculated and measured value for the hydrogen ionization energy, divided by the difference between the measured wavelength and calculated wavelength for hydrogen ionization is very nearly equal to the difference between the photon velocity and the speed of light.
The difference between these two values, independent of how it is calculated, is a measurement error term of approximately 0.00468%.
The differences between the measured and calculated values for hydrogen are of no concern. However, even though the Rydberg equations derive the measurable wavelengths to high accuracy, an explanation requiring the simultaneous emission of two photons is not consistent with the spectral mechanism hypothesis.
The Rydberg explanation for the emission of atomic spectra requires two frequencies:
One frequency is the structural frequency. Structural frequency is proportional to the energy of the positional resonance between an electron and proton (the energy required to hold the electron and proton in the positional resonance).
The photon frequency, equal to the difference between adjacent structural frequencies, is proportional to an ionization energy (the energy required to remove an electron from the positional resonance).
The photon frequency and wavelength are not directly proportional to structural energy and, in atoms larger than hydrogen, cannot be calculated by a Rydberg equation.
Proofs that wavelength and frequency are not directly proportional to energy:
Spectral wavelengths emitted by sources differing greatly in energy, by a discharge tube in the laboratory, by the sun or by the galactic center, are indistinguishable.
In 60 Hertz power transformers the energy of the emitted photons is proportional to the energy of the current (or the magnetic field).
A general explanation for atomic spectra requires an examination of the measured ionization energies and the measured wavelengths of the first four elements larger than hydrogen.
The number of CRC ionization energies (electron Volts in units of kinetic Joule) for each elemental atom larger than hydrogen is equal to the number of nuclear protons; and the number of atomic energies (intrinsic Volts in units of discrete Joule) is also equal to the number of nuclear protons.
While measured wavelengths are not directly proportional to energy, shorter wavelengths nevertheless correspond to lower energies and longer wavelengths to higher energies. For example, ultraviolet photons have shorter wavelengths and lower energies, and visible photons have longer wavelengths and higher energies.
In any atomic spectrum, each measured wavelength corresponds to one specific energy. For this correspondence to hold, the number of wavelengths must either be equal to the number of energies or be an integer multiple of the number of energies.
For example, in helium there are two CRC ionization energies (electron Volts in units of kinetic Joule) corresponding to two atomic energies (intrinsic Volts in units of discrete Joule), fourteen measured wavelengths, and one transition between a wavelength proportional to a lower energy and a wavelength proportional to a higher energy.
In the below table, seven lower and seven higher helium atomic energies are in the first row, the measured wavelengths from shortest to longest are in the third row, and the second row is the ratio of the column wavelength divided by the adjacent lower wavelength. This is the definitive test for a transition from a wavelength corresponding to a lower energy to a wavelength corresponding to a higher energy. In the helium atom, the transition wavelength is also detectable by inspection of the previous wavelengths compared to the following wavelengths.
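The ratio test described above can be sketched as follows. The wavelength list here is hypothetical and purely illustrative; the helium table itself is not reproduced in this passage.

```python
# Sketch of the ratio test: divide each wavelength by the adjacent shorter
# one and flag the largest jump, which marks the transition from the group
# of wavelengths corresponding to lower energies to the group corresponding
# to higher energies. The demo list below is hypothetical, not helium data.

def find_transition(wavelengths):
    """Return (index, ratio) of the largest jump between adjacent wavelengths."""
    ws = sorted(wavelengths)
    ratios = [ws[i + 1] / ws[i] for i in range(len(ws) - 1)]
    i = max(range(len(ratios)), key=lambda k: ratios[k])
    return i + 1, ratios[i]

demo = [320.0, 335.0, 352.0, 361.0, 588.0, 615.0, 640.0]  # hypothetical, nm
idx, ratio = find_transition(demo)   # the 361 -> 588 jump stands out
```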
The transitions are less clear in lithium, beryllium, and boron.
In lithium, beryllium and boron the transition wavelengths are not definitively detectable by simple inspection. However, after the higher energy transitions are established by the ratios of the column wavelength divided by the adjacent lower wavelength, the first transition becomes apparent by inspection of the measured wavelengths.
The spectral mechanism hypothesis has been transformed into a general explanation for atomic spectra:
In hydrogen, a single electron and proton are engaged in a positional resonance at a discretely exact frequency of 3.28E15 Hz. In atoms larger than hydrogen, many electrons and protons are engaged in sustained positional resonances, at frequencies equal to the product of the square of the number of nuclear protons and 3.28E15 Hz, in which CCW quantons are emitted in one direction by electrons and absorbed by nuclear protons, and CW quantons are emitted in the opposite direction by nuclear protons and absorbed by electrons. The positional resonances can be situated at any one of several quantized positions proportional to energy and corresponding to spectral emission and absorption lines. On emission of a photon the energy of the resonance decreases, and the electron drops to a lower energy level. On absorption of a photon the energy of the resonance increases, and the electron jumps to a higher energy level.
Part Six
Cosmology
The purpose of this chapter is to disprove cosmic inflation:
The radiated intrinsic energy which drives the resonance of constant photon velocity is converted into units of intrinsic redshift per megaparsec.
A detailed general derivation of intrinsic redshift (applicable to any galaxy) is made.
The final results of the HST Key Project to measure the Hubble Constant are explained by intrinsic redshift.22
The only measurables in the determination of galactic redshifts are the photon wavelength emitted and received in the laboratory, the photon wavelength emitted by a galaxy and received by an observatory, and the ionization energies.
In the following equations Hydrogen-alpha (Balmer-A) wavelengths are used in calculations of intrinsic redshift.
Intrinsic redshift per megaparsec
The photon intrinsic energy radiated per second due to quanton/graviton emissions is equal to the product of 8 and the discrete Planck constant.
The 2015 IAU value for the megaparsec is proportional to the IAU exact SI definition of the astronomical unit (149,597,870,700 m).
The time of flight per megaparsec is equal to one mpc divided by the photon velocity.
The photon intrinsic energy radiated per megaparsec is equal to the product of time of flight per mpc and the photon intrinsic energy radiated per second due to quanton/graviton emissions.
The decrease in photon frequency due to the energy radiated is equal to the photon intrinsic energy radiated per megaparsec divided by the discrete Planck constant.
The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by decrease in photon frequency.
Note that wavelength and energy are independent, so wavelength cannot be directly determined from energy; frequency, however, is proportional to energy, and the decrease in frequency is proportional to the increase in wavelength.
The intrinsic redshift per megaparsec is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.
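The steps above can be sketched as follows, using standard SI stand-ins for the discrete Planck constant and the photon velocity (the book's discretely exact values are defined elsewhere, so the exact numbers here are illustrative only). Note that the Planck constant cancels between the radiated-energy and frequency-decrease steps.

```python
# Intrinsic redshift per megaparsec, transcribed step by step from the text.
import math

H_PLANCK = 6.62607015e-34   # stand-in for the discrete Planck constant
V_PHOTON = 2.99792458e8     # stand-in for the photon velocity, m/s
AU = 149_597_870_700.0      # exact SI astronomical unit, m
MPC = AU * 648_000_000_000 / math.pi   # megaparsec in metres (~3.0857E22)
H_ALPHA = 656.28e-9         # Hydrogen-alpha (Balmer-A) wavelength, m

e_per_s = 8 * H_PLANCK      # intrinsic energy radiated per second
t_mpc = MPC / V_PHOTON      # time of flight per megaparsec, s
e_mpc = t_mpc * e_per_s     # intrinsic energy radiated per megaparsec
df = e_mpc / H_PLANCK       # frequency decrease; the Planck constant cancels
dlam = V_PHOTON / df        # wavelength increase, m
z_mpc = H_ALPHA + dlam      # intrinsic redshift per megaparsec, m
```

The same chain, with the time of flight scaled by the galaxy's HST Key Project distance, gives the galactic intrinsic redshift derived in the next section.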
General derivation of galactic intrinsic redshift
The distance of the galaxy in units of mpc is that determined by the Hubble Space Telescope Key Project.23 Below, the example calculations are for NGC0300.
The time of flight of photons emitted by NGC0300 is equal to the product of the time of flight per megaparsec and the Hubble Space Telescope Key Project distance of the galaxy.
The photon intrinsic energy radiated by NGC0300 is equal to the product of the time of flight at the distance of NGC0300 and the photon intrinsic energy radiated per second due to quanton/graviton emissions.
The decrease in photon frequency is equal to the photon intrinsic energy radiated by NGC0300 divided by the discrete Planck constant.
The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by decrease in photon frequency.
The intrinsic redshift at the distance of NGC0300 is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.
Results of the HST Key Project to measure the Hubble Constant
The goal of this massive international project, involving more than fifteen years of effort by hundreds of researchers, was to build an accurate distance scale for Cepheid variables and use this information to determine the Hubble constant to an accuracy of 10%.
The inputs to the HST key project were the observed redshifts and the theoretical relativistic expansion rate of cosmic inflation.
In column 2 below, the galactic distances of 22 galaxies in units of mpc are the values determined by the HST Key Project.24
In column 3 below, the galactic distances are expressed in units of meter.
In column 4 below, the time of flight of photons emitted by the galaxy is equal to the distance of the galaxy in meters divided by the photon velocity.
The photon intrinsic energy radiated due to quanton/graviton emissions at the distance of the galaxy is equal to the product of the time of flight of photons emitted by the galaxy and the photon intrinsic energy radiated per second.
The decrease in photon frequency is equal to the photon intrinsic energy radiated by the galaxy divided by the discrete Planck constant.
The increase in photon wavelength due to the photon intrinsic energy radiated is equal to the ratio of the photon velocity divided by decrease in photon frequency.
In column 5 below, the intrinsic redshift at the distance of the galaxy is equal to the Hydrogen-alpha (Balmer-A) emission wavelength plus the wavelength increase.
The Hubble parameter for a galaxy, denominated in units of km/s per mpc, is equal to the product of two ratios: 2 omega-2 (which converts intrinsic energy to kinetic energy) divided by the time of flight of photons emitted by the galaxy and received at the observatory, and the distance of the galaxy in units of kilometer divided by the distance of the galaxy in units of megaparsec.
The Hubble constant is equal to the sum of the Hubble parameters for the galaxies examined divided by the number of galaxies.
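The prescription in the last two paragraphs can be sketched as follows. Omega-2 is defined earlier in the book and its numerical value is not restated in this passage, so it is left as an explicit parameter; the photon velocity is a standard stand-in.

```python
# Per-galaxy Hubble parameter and the averaged Hubble constant, per the text.
import math

V_PHOTON = 2.99792458e8                  # photon velocity stand-in, m/s
AU = 149_597_870_700.0                   # exact SI astronomical unit, m
MPC_M = AU * 648_000_000_000 / math.pi   # megaparsec in metres

def hubble_parameter(d_mpc, omega2):
    """km/s per mpc for one galaxy: (2*omega2 / time of flight) times
    (distance in km / distance in mpc)."""
    d_m = d_mpc * MPC_M
    t_flight = d_m / V_PHOTON            # time of flight, s
    return (2 * omega2 / t_flight) * ((d_m / 1000.0) / d_mpc)

def hubble_constant(distances_mpc, omega2):
    """Average of the per-galaxy Hubble parameters."""
    hs = [hubble_parameter(d, omega2) for d in distances_mpc]
    return sum(hs) / len(hs)
```

Algebraically the distance in metres cancels, so each per-galaxy value reduces to 2 times omega-2 times the photon velocity in km/s divided by the distance in megaparsecs.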
The theory of cosmic inflation has been disproved.
Part Seven
Magnetic levitation and suspension
This chapter was motivated by a video about quantum magnetic levitation and suspension in which superconducting disks containing thin films of YBCO are levitated and suspended on a track composed of neodymium magnet arrays in which a unit array contains four neodymium magnets (two diagonal magnets oriented N→S and the other two S→N).25
An understanding of levitation and suspension by neodymium magnet arrays begins with consideration of the differences between the levitation of a superconducting disk containing thin films of metal oxides and the levitation of a thin slice of pyrolytic carbon.
Oxygen is paramagnetic. An oxygen atom is magnetized by the magnetic field of a permanent magnet in the direction of the external magnetic field (for example, a S→N external magnetic field induces a S→N internal field) and reverts to a demagnetized state when the field is removed. The levitation of a superconducting disk requires an array of neodymium magnets and cooling below the critical temperature. In quantum levitation or suspension, the position of the disk is established by holding (pinning) it in the desired location and orientation, and if a pinned disk is forced into a new location and orientation, it remains pinned in the new location.
Carbon is diamagnetic. A carbon atom is magnetized by a magnetic field in the direction opposite to the magnetic field (for example, a N→S external magnetic field induces a S→N internal field) and reverts to a demagnetized state when the field is removed. Magnetic levitation occurs at room temperature, a thin slice of pyrolytic carbon levitates at a fixed distance parallel to the surface of an array of neodymium magnets, and a levitated slice forced closer to the surface springs back to the fixed distance once the force is removed.
In the levitation of pyrolytic carbon, CCW quantons are emitted by a magnetic North pole and CW quantons are emitted by a magnetic South pole (magnetic emission of quantons is discussed in Part Four).
The number of chirality meshing interactions required to exactly oppose the gravitational force on a thin slice of pyrolytic carbon (or any object) is equal to the local gravitational constant of earth divided by the product of the proton amplitude and the square root of Lambda-bar.
In the above equation, the local gravitational constant of earth (as derived in Part One) is equal to 10 meters per second per second and the proton amplitude (also derived in Part One) is equal to 150 and, (as derived in Part Four) the square root of Lambda-bar is the deflection distance (units of meter) of a single chirality meshing interaction between a quanton and an electron.
The above equation is proportional to energy: the higher the energy, the higher the number of chirality meshing interactions, and the higher the levitation distance; the lower the energy, the lower the number of chirality meshing interactions, and the lower the levitation distance.
Pyrolytic carbon is composed of planar sheets of carbon atoms in which a unit cell is composed of a hexagon of carbon atoms joined by double bonds. Carbon atoms are bonded by either lower energy single bonds proportional to the first ionization energy or higher energy double bonds proportional to the second ionization energy. The measured first and second ionization energies of carbon are 1086.5 and 2352.0 (units of kJ/mol)27.
Due to the discretely exact value of PE charge resonance, in carbon (or any elemental atom) the quanton emission-absorption frequency is equal to 3.28E15 Hz.
The quanton emission frequency of a unit cell of pyrolytic carbon is equal to the product of the discretely exact PE charge resonance frequency of 3.28E15 Hz and the ratio of the second ionization energy of carbon divided by the first ionization energy of carbon.
The levitation distance of a thin slice of pyrolytic carbon (in units of mm) is equal to the product of the ratio of quanton emission frequency of a pyrolytic carbon unit cell divided by six (the number of carbon atoms in a unit cell) times 1000 mm/m and the square root of Lambda-bar.
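The two formulas above can be sketched as follows. The square root of Lambda-bar is derived in Part Four and its numerical value is not restated in this passage, so it is left as an explicit parameter.

```python
# Levitation distance of a thin slice of pyrolytic carbon, per the text:
# unit-cell quanton emission frequency scaled by the single-interaction
# deflection distance (the square root of Lambda-bar, left as a parameter).

F_RESONANCE = 3.28e15   # discretely exact PE charge resonance, Hz
IE1_C = 1086.5          # first ionization energy of carbon, kJ/mol
IE2_C = 2352.0          # second ionization energy of carbon, kJ/mol

def carbon_levitation_mm(sqrt_lambda_bar):
    """Levitation distance in mm of a thin pyrolytic-carbon slice."""
    f_unit = F_RESONANCE * (IE2_C / IE1_C)        # unit-cell emission frequency
    return (f_unit / 6) * 1000 * sqrt_lambda_bar  # six carbon atoms per cell
```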
The oxygen atoms in YBCO oxides are bonded by either lower energy single bonds proportional to the first ionization energy or higher energy double bonds proportional to the second ionization energy. The measured first and second ionization energies of oxygen are 1313.9 and 3388.3 (units of kJ/mol).
The three YBCO metallic oxides are composed of low energy single bonds, high energy double bonds, or single and double bonds. In yttrium oxide (Y2O3), a single bond connects each yttrium atom with the inside oxygen, and a double bond connects each yttrium atom with one of the two outside oxygens. In barium oxide (BaO) the two atoms are connected by a double bond. Copper oxide is a mixture of cuprous oxide (copper I oxide), in which a single bond connects each of two copper atoms with the oxygen atom, and cupric oxide (copper II oxide), in which a double bond connects the copper atom with the oxygen atom.
Voltage is the emission of quantons either directly by the Q-axis of an electron or proton or transversely by a magnetic field from which CCW quantons are emitted by the North pole and CW quantons by the South pole.
The mechanism of magnetic levitation or suspension of a superconducting disk is the absorption of quantons, emitted by a neodymium magnet array, in chirality meshing interactions by electrons in the oxygen atoms of superconducting YBCO oxides, resulting in repulsive deflections due to CCW quantons (in quantum levitation) and attractive deflections due to CW quantons (in quantum suspension).
The levitation or suspension distance of a superconducting YBCO oxide is higher (the maximum distance) for double bonded oxides and lower (the minimum distance) for single bonded oxides. The initial position of the YBCO disk is established by momentarily holding (pinning) it in the desired location and orientation at some specific distance from the neodymium magnet array.
In each one-hundredth of a second, more than 2E14 chirality meshing interactions establish the intrinsic energy of electrons within the superconducting oxides. At the same time, at any specific distance above or below the neodymium magnet array the number of quanton interactions, inversely proportional to the square of distance, establishes the availability of quantons to be absorbed at that specific distance. The result is an electrical Stable Balance of the electrons in superconducting oxides at specific distances from the neodymium magnet array, analogous to the gravitational Stable Balance of particles in planets at a specific orbital distance from the sun.
This is the mechanism of pinning in YBCO superconducting disks.
The levitation or suspension distance (units of mm) of a single bonded superconducting YBCO oxide is equal to the product of the ratio of the first ionization energy of oxygen divided by itself, the discretely exact PE charge resonance of 3.28E15 Hz, the square root of Lambda-bar, the ratio of the discrete steric factor divided by 1 (single bond), and 1000 (to convert m to mm).
The levitation or suspension distance (units of mm) of a double bonded superconducting YBCO oxide is equal to the product of the ratio of the second ionization energy of oxygen divided by the first ionization energy of oxygen, the discretely exact PE charge resonance of 3.28E15 Hz, the square root of Lambda-bar, the ratio of the discrete steric factor divided by 2 (double bond), and 1000 (to convert m to mm).
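Both distance formulas can be combined in one sketch. The discrete steric factor and the square root of Lambda-bar are defined elsewhere in the book and not restated here, so both are left as explicit parameters; only the structure of the calculation is illustrated.

```python
# YBCO levitation/suspension distance for single (1) and double (2) bonds,
# per the two formulas above. The steric factor and sqrt(Lambda-bar) are
# explicit parameters because their values are not given in this passage.

F_RESONANCE = 3.28e15   # discretely exact PE charge resonance, Hz
IE1_O = 1313.9          # first ionization energy of oxygen, kJ/mol
IE2_O = 3388.3          # second ionization energy of oxygen, kJ/mol

def ybco_distance_mm(bond_order, steric_factor, sqrt_lambda_bar):
    """Distance in mm for a single (1) or double (2) bonded oxide."""
    ie_ratio = (IE1_O if bond_order == 1 else IE2_O) / IE1_O
    return ie_ratio * F_RESONANCE * sqrt_lambda_bar * (steric_factor / bond_order) * 1000
```

Whatever values the two unknown parameters take, the double- to single-bond distance ratio is fixed by the oxygen ionization energies alone: (3388.3 / 1313.9) / 2, about 1.29.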
1 Original letter from Isaac Newton to Richard Bentley, 189.R.4.47, ff. 7-8, Trinity College Library, Cambridge, UK http://www.newtonproject.ox.ac.uk
2 https://nssdc.gsfc.nasa.gov/planetary/planetfact.html, accessed Dec 24, 2021
3 Urbain Le Verrier, Reports to the Academy of Sciences (Paris), Vol 49 (1859)
4 Clemence G.M. The relativity effect in planetary motions. Reviews of Modern Physics, 1947, 19(4): 361-364.
5 Eric Doolittle, The secular variations of the elements of the orbits of the four inner planets computed for the epoch 1850 GMT, Trans. Am. Phil. Soc. 22, 37(1925).
6 Michael P. Price and William F. Rush, Nonrelativistic contribution to mercury’s perihelion precession. Am. J. Phys. 47(6), June 1979.
7 Wikimedia, by Daderot made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication, location National Museum of Nature and Science, Tokyo, Japan.
8 Illustration from 1908 Chambers’s Twentieth Century Dictionary. Public domain.
9 Wikimedia “Sine and Cosine fundamental relationship to Circle and Helix” author Tdadamemd.
10 By Jordgette – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=9529698
11 By Ebohr1.svg: en:User:Lacatosias, User:Stannered; derivative work: Epzcaw (talk) – Ebohr1.svg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15229922
13 O. Stern, Z. fur Physik, 7, 249 (1921), title in English: “A way to experimentally test the directional quantization in the magnetic field”.
14 Ronald G. J. Fraser, Molecular Rays, Cambridge University Press, 1931.
15 I.I. Rabi, S. Millman, P. Kusch and J.R. Zacharias, “The Molecular Beam Resonance Method for Measuring Nuclear Magnetic Moments”, Physical Review, 1939.
16 INDC: N. J. Stone 2014. Nuclear Data Section, International Atomic Energy Agency, www-nds.iaea.org/publications
17 “Quantum theory yields much, but it hardly brings us close to the Old One’s secrets. I, in any case, am convinced He does not play dice with the universe.” Letter from Einstein to Max Born (1926).
18 “That gravity should be innate inherent & essential to matter so that one body may act upon another at a distance through a vacuum without the mediation of anything else by & through which their action or force may be conveyed from one to another is to me so great an absurdity that I believe no man who has … any competent faculty of thinking can ever fall into it.” Original letter from Isaac Newton to Richard Bentley, 189.R.4.47, ff. 7-8, Trinity College Library, Cambridge, UK http://www.newtonproject.ox.ac.uk
19 Ionization energies of the elements (data page), https://en.wikipedia.org/
20 How to determine the range of acceptable results for your calorimeter, Bulletin No. 100, Parr Instrument Company, www.parrinst.com.
21 See www.wikipedia.org, www.hyperphysics.com, www.shutterstock.com
22 Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.
23 Page 60, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.
24 Page 60, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, Astrophysical Journal 0012-376v1, 18 Dec 2000.
26 This image has been released into the public domain by its creator, Splarka. https://commons.wikimedia.org/wiki/File:Diamagnetic_graphite_levitation.jpg
27 Ionization energies of the elements (data page), https://en.wikipedia.org/
Canada is great at AI development, but what should the country’s first Minister for Artificial Intelligence make his key priorities? University of Waterloo’s Anindya Sen and the C.D. Howe Institute’s Rosalie Wyonch offer strong insight, and geek out a bit about the economics-oriented nature of machine learning algorithms.
Take it from our friends at rennlist, Porsche has built some truly remarkable engines over the years. The air-cooled 911/83 engine that powered the 1973 911 2.7L Carrera RS is just one example. But if you were asked to go on and list the ten all-time greatest Porsche engines there is a good chance the list would be dominated by various Mezger engines.
The 12-cylinder found in the Le Mans-winning Porsche 917? That’s a Mezger. The 3.6L flat six in the 996 GT3? That’s a Mezger. The 4.0L in the 997 GT3 RS 4.0? That’s a Mezger.
How about going all the way back to the original 901/911 engine? Yup, that’s a Mezger.
But what is a Mezger engine, and why are they so special? That is what we are going to discuss here today. We have come up with 9 reasons why the Mezger engine is so special. And there is no other place to begin the discussion than the legendary man behind these engines, Hans Mezger.
1. Hans Mezger
A single slide can in no way capture all that the legendary Hans Mezger accomplished. He joined Porsche back in October of 1956. He loved Porsche sports cars, but his first job was working on diesel engine development. In 1960, he began to work on the type 753 flat-eight engine for Porsche’s first Formula 1 car. Soon after he designed the 6-cylinder boxer engine for the 901/911. He was then promoted to the head of race car design. He was responsible for the 917 and the 12-cylinder engine that powered it to Porsche’s first Le Mans victory in 1970. He then was responsible for the turbocharged 917/10 and 917/30 cars that dominated Can Am. He designed and developed the six-cylinder turbo engines for the Type 935 and 936 race cars.
Mezger designed the 1.5L V6 engine known as the TAG Turbo that powered the McLaren Formula 1 cars to championships in 1984, 1985 and 1986. His engines would eventually be found in the most performance-oriented Porsche road cars such as the 996 GT3, GT2 and Turbo. Mezger remained closely connected with the Porsche brand until he passed away on June 10, 2020, at the age of 90.
2. Motorsport Pedigree
Mezger built engines for the most demanding races in the world. His engines that were put into Porsche road cars have the same engineering approach. These engines are designed for long-term high performance. They are essentially overbuilt for road use. These engines were not designed to meet a certain price point. They were designed to provide the best performance. There were no corners cut with any Mezger engine.
3. Birth of the GT3
Many people view the 911 GT3 models as the pinnacle of the 911 range, in large part because of the track-focused, high-revving flat-six engine out back. It all started with the M96/79 engine found in the 996 GT3. The European market got the GT3 a few years before us and had the M96/76 engine, but the point is the same. This dry-sump engine could rev to 8,200 rpm all day long and was derived from the engine in the Porsche 911 GT1, which happened to win a little race called the 24 Hours of Le Mans. The street version is nearly bulletproof, and the GT3 legend was born.
4. Turbocharged Versions
If the GT3 was just not powerful enough for you, Porsche had a solution. The GT2 and Turbo also used Mezger engines, but with a pair of turbochargers. They are not as high-revving as the normally aspirated units, but they offer more power and a lot more torque. And these engines are just as reliable.
5. Reliability
The Mezger engines are not just more powerful but also more reliable. The knock on the M96 and M97 engine series has long been the IMS bearing. But the Mezger versions don’t have the same design. Instead, they use plain bearings that are pressure-fed engine oil for lubrication. These bearings don’t fail. That alone makes the Mezger significantly more reliable.
6. Sound
Even if these engines were not more durable and powerful, people would buy them for their sound alone. It is not just their high-revving nature in naturally aspirated form; the design of the engine itself, with features such as dual timing chains, gives these engines a more characterful sound. They are more gravelly and “motorsporty” sounding than the non-Mezger engines.
7. Power Upgrades
These engines were overbuilt and as such, are typically able to comfortably handle more power if you want to modify them. The turbo versions can easily be tuned to reliably make more power. Of course, every engine has its limitations, but the Mezger engine is robust enough to make more power without hurting reliability.
8. The 997 GT3 RS 4.0
Many people consider the 997 GT3 RS 4.0 to be the best Porsche 911 road car of all time. It just so happens to be equipped with the last Mezger engine. A 4.0L jewel making nearly 500 naturally aspirated horsepower. The engine revs to 8,500 rpm and has more character in it than an entire truckload of new 992.2 Carreras. The 4.0L marked the end of an era. It is the last and possibly the best road-going Mezger engine ever produced.
9. Rarity and Desirability
Not every Porsche got a Mezger engine. Technically, all the air-cooled 911s have a Mezger-designed engine, but they have been out of production for over a quarter of a century now. Only a small percentage of water-cooled Porsche engines were a Mezger design. And Porsche is not building any more of them. So, what is out there today is all that will ever be out there. These engines are found in the most desirable Porsche models, and these cars are collectible today and will continue to be collectible for the foreseeable future. If you buy a Porsche with a Mezger engine today, the chances are good that it will be worth the same or even more tomorrow. For the Silo, Joe Kucinski.
Hello AI Tinkerers and welcome to the latest Sci-Tech article here at The Silo. Get ready and pay attention, because the spotlight is on a builder who knows how to get around ‘bad AI prompting’. Just recently, he has helped spin out 40 startups using one core skill. Can you guess which one? Yep. Prompting.
In the One-Shot video below, Kevin Leneway breaks down his real workflow for shipping AI products fast — using markdown checklists, agent coding, rubric-based UI design, and zero Figma.
“I don’t need Figma. I just prompt my way to a working front end.” — Kevin Leneway
While most people are still asking ChatGPT to write code snippets, Kevin is building full-stack products using nothing but prompts. In this One-Shot episode, he reveals the exact system he’s used to launch over 40 startups at Pioneer Square Labs. We break down:
How he writes BRDs and PRDs that don’t suck
Why vibe coding fails and how to actually use AI agents
The markdown checklist that replaces a product team
How to go from idea to working app with zero context switching
His open-source starter kit that makes Cursor and Claude 3.5 feel like magic
“I’ve helped launch six startups including Singlefile (singlefile.io, $24M raised), Recurrent (recurrentauto.com, $24M raised), Joon (joon.com, $9.5M raised), Gradient (gradient.io, $3.5M raised), Genba (genba.ai, acquired May 2022) and Enzzo (enzzo.ai, $3M raised).”
If you’re a builder, this will change how you work. No gimmicks. Just a ruthless focus on speed, clarity, and shipping. Watch now. Learn the system. Steal it. For the Silo, Joe at aitinkerers.org
While generative AI transforms how Americans shop, it’s also quietly powering a counterfeit crisis now spiraling out of control. A groundbreaking new report from Red Points and OnePoll, The Counterfeit Buyer Teardown, reveals that AI is no longer just helping consumers find the best deals; it’s also helping them find fakes. From influencer-driven “dupe culture” to hyper-realistic fake storefronts, the study exposes a booming underground economy that’s been supercharged by technology. With 28% of counterfeit buyers now using AI tools to seek out knock-offs, and fraudulent social media ads spiking 179% in just one year, the findings deliver a wake-up call for brands, regulators, and shoppers alike. Red Points execs are available to break down the data, discuss solutions, and explain why this rapidly evolving trend is both a technological and ethical crisis for the digital marketplace.
AI Supercharging U.S. and Other E-Commerce Counterfeit Crisis
An explosive new report, “The Counterfeit Buyer Teardown,” paints a concerning picture of a rapidly evolving and increasingly sophisticated counterfeit goods market, driven by a new factor: Artificial Intelligence. Forget the back alleys; findings from the research—conducted by market research firm OnePoll and AI company Red Points in February 2025—highlight that the future of fakes is digital, AI-assisted, and alarmingly mainstream.
The convergence of technology, social media, and shifting consumer mindsets is reshaping e-commerce—and not always for the better. As AI accelerates both the spread and appeal of counterfeit goods, the challenge is no longer just spotting fakes—it’s confronting a counterfeit economy that’s growing smarter, faster, and harder to contain.
“As counterfeiters adopt advanced tools like AI, the fight against fakes is becoming more complex and more urgent,” said Laura Urquizu, CEO & President of Red Points. “We’re now seeing AI shape both the threat and the solution. In 2024 alone, our firm detected 4.3 million counterfeit infringements online—an alarming 15% increase year-over-year.”
Alarming indeed. Here are 5 key revelations from the study.
1. AI is the New Enabler of Counterfeiting – A Two-Sided Threat:
The Counterfeiters’ Edge: AI is dramatically lowering the barrier to entry for bad actors, who can now mimic brand listings and impersonate social media accounts with unprecedented ease and speed. They can also effortlessly create professional-looking fake websites – a problem that, according to Red Points’ data, is projected to surge 70% in 2025. This isn’t just about cheap knock-offs anymore; it’s about sophisticated deception at scale.
The Consumers’ Assistant: Strikingly, 28% of online shoppers who bought fake goods used AI tools to find them. This isn’t fringe behavior; it’s a growing trend, especially among Gen X, suggesting consumers are actively leveraging AI in their pursuit of cheaper alternatives. That fundamentally shifts the narrative – it’s not just about being tricked; some shoppers are actively seeking fakes with AI’s help.
2. Accidental Counterfeiting is a Major Problem – Trust Signals are Being Hijacked:
1 in 4 luxury counterfeit purchases are unintentional. This shatters the perception that buyers knowingly seek out high-end fakes. Realistic pricing, secure payment promises, and active (but fake) social media presence are successfully deceiving consumers. AI-generated legitimacy cues are becoming indistinguishable from the real deal.
Brands are Paying the Price for These Mistakes: A staggering one in three shoppers stops buying from the genuine brand after an accidental counterfeit experience. This highlights the serious damage to brand loyalty and future sales, even when the brand had nothing to do with the fake. High-trust categories like luxury and toys are particularly vulnerable.
3. The “Dupe Economy” is Real and Influencer-Driven:
Nearly a third (31%) of intentional counterfeit buyers were swayed by influencer promotions. Social media is driving the demand for “dupes” – budget-friendly replicas. Authenticity is taking a backseat to price and perceived identical appearance, especially among younger demographics.
This isn’t just about saving money; it’s a shift in consumer mindset. The report suggests a growing acceptance of fakes as clever alternatives, fueled by social validation and influencer endorsements.
4. Marketplaces Remain Key, But Social Media and Fake Websites are Surging:
Marketplaces (both US and China-based) are still the primary channels for counterfeit purchases. However, fake websites (accounting for 34% of unintentional purchases) and social media are rapidly gaining ground as sophisticated avenues for distribution, amplified by AI’s ability to create convincing facades.
Social media ads redirecting to infringing websites saw a massive 179% year-over-year growth. This highlights the increasing sophistication of counterfeiters in leveraging advertising platforms to drive traffic to their fake storefronts.
5. Younger Generations are More Vulnerable in Key Categories:
Millennials are significantly more likely to have their personal data stolen after purchasing from fake websites (44% vs. 34% average). This suggests a higher susceptibility to sophisticated phishing scams disguised as legitimate e-commerce sites.
Gen Z and Millennials are 2-4 times more likely to accidentally purchase counterfeit luxury goods and toys compared to Baby Boomers. Their online savviness might be a double-edged sword, making them more exposed to deceptive listings.
This study serves as both a consumer alert and a brand wake-up call. The rise of AI as a tool for both counterfeiters and consumers is a seismic shift that demands urgent attention. With compelling data and a clear-eyed look at accidental purchases, influencer-driven “dupe culture,” and the growing sophistication of fake storefronts, the findings paint a stark warning for the future of online shopping.
“Counterfeiting poses a serious and evolving threat to innovative businesses and consumer safety,” notes Piotr Stryszowski, Senior Economist at the Organization for Economic Co-operation and Development (OECD). “Criminals constantly adapt, exploiting new technologies and shifting market trends – particularly in the online environment. To effectively counter this threat, policymakers need detailed, up-to-date information. This study makes an important contribution to our understanding of how counterfeiters operate and how consumers behave online.”

Ultimately, The Counterfeit Buyer Teardown report underscores a new reality: counterfeiting is no longer confined to shady sellers or easily spotted scams – it’s embedded in the very technologies shaping modern commerce. As AI continues to blur the lines between real and fake, the pressure is on for brands, platforms, and policymakers to respond with equal speed and sophistication. Combating this growing threat will require more than just awareness – it demands collaboration, innovation, and a commitment to restoring trust in the digital marketplace before the counterfeit economy becomes the new normal. For the Silo, Merilee Kern.
Merilee Kern, MBA, is a brand strategist and analyst who reports on industry change makers, movers, shakers and innovators: field experts and thought leaders, brands, products, services, destinations and events. Merilee is a regular contributor to the Silo. Connect with her at www.TheLuxeList.com and on LinkedIn at www.LinkedIn.com/in/MerileeKern.
Boulder, Colorado, March 2025 – PS Audio announces the release of The Audiophile’s Guide, a comprehensive 10-volume series covering every aspect of audio system setup, equipment selection, analog and digital technology, speaker placement, room acoustics, and other topics related to getting the most musical enjoyment from an audio system. Written by PS Audio CEO Paul McGowan, it’s the most complete body of high-end audio knowledge available anywhere.
The Audiophile’s Guide hardcover book series is filled with clear, practical wisdom and real-life examples that guide readers in getting the most from their audio systems, regardless of cost or complexity. The books include how-to tips, step-by-step instructions, and real-world stories and examples featuring actual listening rooms and systems. Paul McGowan noted, “Think of it as sitting down with a knowledgeable friend who’s sharing hard-won wisdom about how to make music come alive in your home.”
The 10 books in the series include:
The Stereo – learn the essential techniques that transform good systems into great ones, including speaker placement, system matching, developing critical listening skills, and more.
The Loudspeaker – even the world’s finest loudspeakers will not perform to their potential without proper setup. Master the techniques that help speakers disappear, leaving the music to float in three-dimensional space.
Analog Audio – navigate the world of turntables, phono cartridges, preamps, power amplifiers, and vacuum tubes, and find out how analog sound continues to offer an extraordinary listening experience.
Digital Audio – from sampling an audio signal to reconstructing it in high-resolution sound, this volume explains and demystifies the digital audio signal path and the various technologies involved in achieving ultimate digital sound quality.
Vinyl – discover the secrets behind achieving the full potential of analog playback in this volume that covers every aspect of turntable setup, cartridge alignment, and phono stage optimization.
The Listening Room – the space in which we listen is a critical yet often overlooked aspect of musical enjoyment. This volume tells how to transform even challenging spaces into ideal listening environments.
The Subwoofer – explore the world of deep bass reproduction, its impact on music and movies, and how to achieve the best low-frequency performance in any listening room.
Headphones – learn about dynamic, planar magnetic, electrostatic, closed-back and open-air models and more, and how headphones can create an intimate connection to your favorite music.
Home Theater – enjoy movies and TV with the thrilling, immersive sound that a great multichannel audio setup can deliver. The book explains how to bring the cinema experience home.
The Collection – this volume distills the knowledge of the preceding books, drawing on more than 50 years of Paul McGowan’s experience in audio. Like the other volumes in the series, it’s written in an accessible style yet filled with technical depth, providing the ultimate roadmap to audio excellence and musical magic.
Volumes one through nine of The Audiophile’s Guide are available at a suggested retail price of $39.99 USD each, with Volume 10, The Collection, offered at $49.99 USD. In addition, The Audiophile’s Guide Limited Run Collectors’ Edition is available as a deluxe series with case binding, with the books presented in a custom-made slipcase. Each Collectors’ Edition set is available at $499.00 USD with complimentary worldwide shipping.
About PS Audio

Celebrating 50 years of bringing music to life, PS Audio has earned a worldwide reputation for excellence in manufacturing innovative, high-value, leading-edge audio products. Located in Boulder, Colorado, at the foothills of the Rocky Mountains, PS Audio’s staff of talented designers, engineers, production and support people build each product to deliver extraordinary performance and musical satisfaction. The company’s wide range of award-winning products includes the all-in-one Sprout100 integrated amplifier, audio components, power regenerators and power conditioners.