OSMOSIS Science Magazine
Fall 2023 Edition
Cover designed by Israa Draz + ChatGPT
Writers:
Andrew Watts
Isabel DiLandro
Aiden Hills
Bezawit Mulatu
Kritim K. Rijal
Paxton Mills
Jack DuPuy
Kayla Friedman
Editors:
Isabel DiLandro
Paxton Mills
Joshua Pandian
Designers:
Israa Draz
Yifei Qi
Bezawit Mulatu
Dear Reader,
Thank you for opening Osmosis Magazine. We are excited to bring you more accessible, intriguing stories in science and technology spanning the disciplines of chemistry, environmental science, and mathematics. In addition to this edition’s science articles, we are excited to introduce faculty interviews, which chronicle select faculty members’ journeys into science and some of their passions outside of it. Interactive components such as crossword puzzles and word searches are also included.
Our goal as a magazine is to share the joy of science with the UR community in an accessible and entertaining fashion. We hope that you enjoy reading this Fall edition of Osmosis Magazine.
Happy reading,
Andrew Watts
Editor in Chief, Osmosis Science Magazine
Meet the Team:
Table of Contents
How Shape Affects Drug Action By: Andrew Watts
Unraveling ADAR expression and RNA editing activity in adult and embryonic tissue in the squid Euprymna berryi By: Kayla Friedman
How neonicotinoids in pesticides impact bee behavior and ecological interactions By: Jack DuPuy
Rat ultrasonic vocalizations and their implication in limiting anthropomorphizing in animal research By: Isabel DiLandro
The Algebraic Background of Error-Correcting Code By: Aiden Hills
Pixels of Our Universe: About the Simulation Hypothesis By: Kritim K. Rijal
Faculty Interview- Dr. Michael Leopold By: Paxton Mills
Osmosis Crossword Puzzle By: Andrew Watts
Osmosis Word Search By: Andrew Watts
“How Shape Affects Drug Action”
Andrew Watts
I. Drugs have shapes?
There are innumerable factors that influence the development and efficacy of pharmaceutical agents, but one of the most crucial considerations is the shape of the drug. Yes, the shape. Drugs come in all different shapes and sizes. Some are ovular capsules, while others are flat, circular tablets. There are even liquid drugs that lack any shape at all. While the shapes and sizes of pills are what most of us think of when we hear about drug shapes, a synthetic chemist is more concerned with the drug’s relative spatial arrangement of atoms in molecules. In chemistry, the term that describes this phenomenon is stereochemistry. To understand stereochemistry, it is vital to know that chemical molecules exist as three-dimensional arrangements of atoms in space. The variability of a drug’s spatial arrangement, or shape, can have profound effects on its function, leading pharmaceutical companies to isolate a single “shape” to prevent adverse, and sometimes even lethal, effects.
II. Why does shape matter?: A case study
In the 1950s, a compound called Thalidomide was identified as a treatment for morning sickness for pregnant women.1 After its release onto the market, children born to mothers who had taken Thalidomide showed severe birth defects, including shortened arms, nonfunctional hands, and malformed urinary and gastrointestinal tracts. Sadly, these children faced a survival rate of only 50%. Thalidomide was found to have been sold as a racemic mixture, which means that two stereoisomers, or shapes, were present in the drug. Further investigation into the two shapes of the drug found that only one of the shapes was therapeutically active, while the other was causing birth defects.
Figure 1: Two “shapes” of Thalidomide. The solid wedge means that the bond protrudes toward the viewer, while the dashed wedge means the bond goes back into the page. These two molecules are not the same. The “S” isomer causes birth defects.
Thalidomide is a prime example of the importance of stereochemistry in pharmaceutical design and synthesis. Now, many drugs are developed and marketed in only one of their isomers, or shapes, mitigating any disastrous effects.
III. How to Select Shape
Figure 3: Asymmetric hydrogenation methodology developed by Noyori using chiral catalysts, with applications in the pharmaceutical industry such as the synthesis of the antibacterial levofloxacin.2
Enantiopure drugs are much more common now, thanks to the advanced synthetic and purification methods developed in the last 50 years. Asymmetric catalysis is a field of chemistry that focuses on developing synthetic methods using catalysts to favor one enantiomer over another.
Notable advancements in enantioselective catalysis, particularly asymmetric catalysis, have been achieved through the development of various catalysts, such as chiral ligands and organocatalysts. The work of researchers like Ryoji Noyori, who was awarded the Nobel Prize in Chemistry in 2001 for his contributions to asymmetric hydrogenation, and Ei-ichi Negishi, who shared the Nobel Prize in Chemistry in 2010 for the development of palladium-catalyzed cross-couplings, has significantly impacted the field.2,3
Additionally, breakthroughs in biocatalysis and the use of enzymes as chiral catalysts have expanded the scope of enantioselective methods. These advancements underscore the importance of enantioselective strategies in the synthesis of complex molecules with defined stereochemistry, contributing to the progress of diverse scientific and industrial applications.4
IV. Impact
The ability to selectively manipulate and control the stereochemistry of drug molecules has not only enhanced the precision of pharmacological interactions but has also significantly expanded the therapeutic potential of various compounds. For instance, the separation of enantiomers in drugs like omeprazole, a proton pump inhibitor, has led to more effective treatments for acid-related disorders with reduced side effects. Additionally, the synthesis of enantiopure versions of certain antidepressants, such as fluoxetine, has improved their therapeutic profiles, offering enhanced relief for patients with mood disorders.5
Enantioselective synthesis has enabled the production of chiral drugs with improved efficacy and reduced side effects, leading to safer and more targeted treatments for a wide array of medical conditions. This paradigm shift has not only streamlined the drug discovery process but has also catalyzed the development of novel therapeutic agents that were once deemed unattainable. As the pharmaceutical landscape continues to evolve, the profound impact of enantioselective methods on drug design and synthesis promises to shape the future of medicine, offering new possibilities for personalized and highly effective therapeutic interventions.
Works Cited:
Tokunaga, E., Yamamoto, T., Ito, E., & Shibata, N. (2018, November 20). Understanding the thalidomide chirality in biological processes by the self-disproportionation of enantiomers. Scientific Reports. https://www.nature.com/articles/s41598-018-35457-6
Unraveling ADAR expression and RNA editing activity in adult
and embryonic tissue in the squid Euprymna berryi
Kayla Friedman1,2, Gjendine Voss, Ph.D.1, and Joshua Rosenthal, Ph.D.1
1 Marine Biological Laboratory, Woods Hole, Massachusetts, 2Department of Biology, University of Richmond, Virginia
RNA editing by adenosine deamination gives an organism the ability to alter its protein-coding sequences, resulting in an overall change in protein structure and function. Adenosine deaminases acting on RNA (ADARs) convert specific adenosines (A) into inosines (I) that are interpreted as guanosines (G) by the translational machinery. Coleoid cephalopods (octopus, squid, and cuttlefish), like vertebrates, possess two catalytically active ADAR enzymes, sqADAR1 and sqADAR2, but unlike any other organism, cephalopods recode transcripts by A-to-I RNA editing to an extraordinary extent, particularly in neuronal tissue. Recent advances in genome editing make the squid Euprymna berryi an ideal model to study intricate A-to-I RNA editing mechanisms. However, ADAR expression and activity throughout E. berryi development and adult tissue remains largely unknown. To characterize ADAR activity in E. berryi, our project utilized quantitative PCR and reverse transcription PCR with
subsequent Sanger Sequencing to quantify ADAR expression levels and to examine editing target sites, respectively. Our results indicate that RNA editing is most prominent in neuronal tissue, consistent with previous findings on Doryteuthis pealeii. ADAR1 was found to be most highly expressed in the adult brain and stellate ganglia, accompanied by elevated editing frequencies observed at ADAR1-dependent editing sites in these tissues. ADAR2, on the other hand, was expressed evenly across both nonneuronal and neuronal tissue, and sites edited by ADAR2 were edited at similar levels across tissues. We also determined that ADAR1 is expressed concurrently with neuronal development, agreeing with an observed increase in RNA editing activity in E. berryi embryos at the same time. These insights shed new light onto the complex regulatory mechanisms of RNA editing in cephalopods and provide a foundation for better understanding how these animals can achieve such a remarkably high level of recoding.
“How Neonicotinoids in pesticides impact bee behavior and ecological interactions”
Jack DuPuy
Animal pollination, primarily facilitated by bees, represents a crucial ecosystem service, benefiting nearly 90 percent of flowering plants and contributing to the prosperity of 75 percent of the world's most prevalent crops (Klein et al. 2006). Animal pollination forms the backbone of many ecosystems around the world by facilitating sexual reproduction of plants, and the breakdown of successful pollination would be catastrophic for human and nonhuman animals alike. The delicate balance of this important service and subsequent survival of bees, however, faces multifaceted challenges including but not limited to climate change, habitat loss and fragmentation, disease, urbanization, invasive species, intense management of domesticated bee populations, and increasing escalation of pesticide use on agricultural landscapes in the face of growing demand for food worldwide. Amidst these interconnected threats, neonicotinoid insecticides have recently emerged as a conspicuous factor that demands our attention.
Since their introduction in the 1990s, pesticides containing neonicotinoid compounds have become the most widely used variety of pesticides due to their effectiveness at fighting a wide range of pests and increasing crop yield. As of 2011, neonicotinoids had been documented for over 140 crops in over 120 countries, and their insecticide market share grew from 16% to 24% in just three years from 2005 to 2008 (Jeschke et al. 2010). Neonicotinoid compounds are so effective at increasing crop yields in part due to their ability to be taken up systemically and permeate all plant tissues, and they can even be translocated into pollen and nectar, the main food sources for bees (Lundin et al. 2015). These compounds also tend to have long half-lives, persisting in the environment for months or even years after application.
Unfortunately, the increase in crop yields when using neonicotinoids comes with the cost of negatively impacting important pollinators like bees. The recent decline in the abundance and well-being of bees, especially the well-known and widespread honey bee (Apis mellifera), has sparked significant concern among ecologists, agriculturalists, and even the general public, and harmful effects of neonicotinoids have been theorized as one of the main perpetuating factors of the decline. It is well-documented that even small quantities of neonicotinoids can be fatal for honey bees, which is to be expected given the efficacy of neonicotinoids and the intended purpose of insecticides (Iwasa et al. 2004). Even if the bees don’t die, exposure to sublethal levels of neonicotinoids has been shown to negatively affect the lifespan, foraging ability, and social behaviors of honey bees (Zhao et al. 2022). One aspect of this complex, delicate system that has been far less studied, however, is the preferences of the bees themselves with respect to different food sources.
Because of their strong social network and communication abilities, honey bees have been shown to collectively prefer pollen/water/nectar from some resources over others, including flying longer distances for higher quality resources (Abou-Shaara 2014). One study even showed that honey bees prefer sucrose solutions laced with neonicotinoids over regular sucrose solutions despite the adverse effects that the chemicals have on their behaviors (Kessler et al. 2015). Honey bees clearly exhibit complex foraging preferences, and understanding those preferences will allow us to more successfully implement neonicotinoid alternatives into the environment and enact policies that benefit the pollinators. For example, if honey bees are willing to travel farther distances to reach neonicotinoid-laced nectar, then large-scale neonicotinoid bans may need to be enacted to ensure that honey
bees do not encounter the toxic chemicals and risk colony collapse. Ultimately, because of the various risks they pose to honey bees and other important pollinators, the agricultural industry will need to implement safer alternatives to neonicotinoid pesticides. These new regulations will have to be carried out despite rapidly increasing demands on food production and highly productive crop yields. Luckily, alternatives ranging from other chemical solutions to biological controls like predators to artificial selection for parasite-resilient plants have been demonstrated to effectively replace neonicotinoid pest control in 96% of cases (Jactel et al. 2019). Although neonicotinoids have been banned in Europe since 2018 and are easily replaceable, the United States has only recently started to take action against these harmful pesticides. Recent bans in states including New Jersey, Maine, and Nevada represent positive steps forward, but we must take stronger action against these damaging neonicotinoids and invest in their alternatives. If we don’t, we are putting our pollinators and by extension our crops and ecosystems in a perilous situation.
Works Cited:
Abou-Shaara, H. 2014. The foraging behaviour of honey bees, Apis mellifera: a review. Veterinární medicína 59:1–10. https://pdfs.semanticscholar.org/a52e/c22762a9eed0d3e03ac26d8f22a3a621068b.pdf
Arce, A. N., A. Ramos Rodrigues, J. Yu, T. J. Colgan, Y. Wurm, & R. J. Gill. 2018. Foraging bumblebees acquire a preference for neonicotinoid-treated food with prolonged exposure. Proceedings of the Royal Society B: Biological Sciences 285(1885). https://doi.org/10.1098/rspb.2018.0655
Colin, T., W. G. Meikle, X. Wu, & A. B. Barron. 2019. Traces of a neonicotinoid induce precocious foraging and reduce foraging performance in honey bees. Environmental Science and Technology 53(14):8252–8261. https://doi.org/10.1021/acs.est.9b02452
Donaldson-Matasci, M. C., & A. Dornhaus. 2014. Dance communication affects consistency, but not breadth, of resource use in pollen-foraging honey bees. PLoS ONE 9(10):e107527. https://doi.org/10.1371/journal.pone.0107527
Goulson, D. 2003. Effects of introduced bees on native ecosystems. Annual Review of Ecology, Evolution, and Systematics 34:1–26.
Iwasa, T., N. Motoyama, J. T. Ambrose, & R. M. Roe. 2004. Mechanism for the differential toxicity of neonicotinoid insecticides in the honey bee, Apis mellifera. Crop Protection 23:371–378. https://doi.org/10.1016/j.cropro.2003.08.018
Jactel, H., F. Verheggen, D. Thiéry, A. J. Escobar-Gutiérrez, E. Gachet, & N. Desneux. 2019. Alternatives to neonicotinoids. Environment International 129:423–429. https://doi.org/10.1016/j.envint.2019.04.045
Jeschke, P., R. Nauen, M. Schindler, & A. Elbert. 2010. Overview of the status and global strategy for neonicotinoids. Journal of Agricultural and Food Chemistry 59:2897–2908. https://doi.org/10.1021/jf101303g
Kessler, S. C., E. J. Tiedeken, K. L. Simcock, S. Derveau, J. Mitchell, S. Softley, A. Radcliffe, J. C. Stout, & G. A. Wright. 2015. Bees prefer foods containing neonicotinoid pesticides. Nature 521:74–76. https://www.nature.com/articles/nature17177
Klein, A.-M., B. E. Vaissière, J. H. Cane, I. Steffan-Dewenter, S. A. Cunningham, C. Kremen, & T. Tscharntke. 2006. Importance of pollinators in changing landscapes for world crops. Proceedings of the Royal Society B: Biological Sciences 274:303–313. https://doi.org/10.1098/rspb.2006.3721
Lan, J., G. Ding, W. Ma, Y. Jiang, & J. Huang. 2021. Pollen source affects development and behavioral preferences in honey bees. Insects 12:130. https://doi.org/10.3390/insects12020130
Lundin, O., M. Rundlöf, H. G. Smith, I. Fries, & R. Bommarco. 2015. Neonicotinoid insecticides and their impacts on bees: a systematic review of research approaches and identification of knowledge gaps. PLoS One 10. https://doi.org/10.1371/journal.pone.0136928
Muth, F., R. L. Gaxiola, & A. S. Leonard. 2020. No evidence for neonicotinoid preferences in the bumblebee Bombus impatiens. Royal Society Open Science 7(5). https://doi.org/10.1098/rsos.191883
Shi, J., H. Yang, L. Yu, C. Liao, Y. Liu, M. Jin, W. Yan, & X. B. Wu. 2020. Sublethal acetamiprid doses negatively affect the lifespans and foraging behaviors of honey bee (Apis mellifera L.) workers. Science of The Total Environment 738. https://doi.org/10.1016/j.scitotenv.2020.139924
Zhao, H., G. Li, X. Cui, H. Wang, Z. Liu, Y. Yang, & B. Xu. 2022. Review on effects of some insecticides on honey bee health. Pesticide Biochemistry and Physiology 188. https://doi.org/10.1016/j.pestbp.2022.105219
RAT ULTRASONIC VOCALIZATIONS
AND THEIR IMPLICATION IN LIMITING
ANTHROPOMORPHIZING IN ANIMAL RESEARCH
By Isabel DiLandro
Emotions in humans are very difficult to measure in a scientifically responsible manner. They are fickle, difficult to define, and more difficult still to quantify, being subjective and malleable. If affective science, also known as the study of emotion, is difficult to fully grasp with participants who can talk back to us, providing verbal insight into the inner workings of their cognition and self-reporting their thoughts and feelings for interpretation, then what are researchers meant to do when it comes time to measure the same responses in subjects that cannot provide them with any of that information? That is only one of the many struggles one must take into consideration when conducting animal research. Many behavioral measures have been developed in response to these challenges, but each one has its limitations. However, one tool in particular is beginning to gain popularity for its quantitative advantage in evaluating behavior in the most common preclinical model: rats.
Ultrasonic vocalizations (USVs) are calls, inaudible to human ears, made by rats to communicate with each other; they can vary in meaning and utility based on many factors such as age, affect, or even maternal status. Two fundamental types of USVs have developed in adult rats, occurring typically around 22 kilohertz (kHz) and 50 kHz.
While the 22 kHz calls are used to signify negative states and aversive situations, 50 kHz calls are emitted in appetitive situations and positive states. Another difference between the two lies in their appearance, which can be observed when the calls are displayed visually using software to supplement humans’ lack of auditory discrimination. 22 kHz calls are long in duration, monotonous, and low in peak frequency, while 50 kHz calls have a short duration and a high bandwidth for mobility at a high peak frequency. Further division can be seen within the 50 kHz range, with both flat and modulated frequencies appearing based on more specific contexts surrounding the appetitive states. Reward and positive affect have been seen to be associated with the modulated calls, while flat calls signify efforts to make contact and other functions related to social coordination. Interestingly, as rats age, their calls appear to bear a reduced bandwidth as well as reduced peak frequency and sound intensity, but the actual quality of the message wavers very little and is still quite recognizable by other rats. Mechanistically, 22 kHz calls are thought to be connected to what is known as the mesolimbic cholinergic system, a physiological system responsible for sending certain types of messages from the brain to the body, thought to be mediated by the neurotransmitter acetylcholine, a main excitatory neurotransmitter which oversees the process of rousing the body’s physiological response to stimuli. 50 kHz calls, on the other hand, correspond with the mesolimbic dopaminergic system, seemingly mediated by the neurotransmitter dopamine and the nucleus accumbens brain region, both of which are involved in the body’s reward system and act as mechanisms behind motivation and positive emotion.
Delving further into the functional quality of these USVs, research has shown that 22 kHz calls are primarily for intra-colony signaling, warning fellow rats about incoming dangers. These calls are not directed at predators. Occasionally, these calls will also play an important role in dyadic communication between a pair of rats and are typically involved in a rat’s removal from social interaction in its collective community. 50 kHz calls, on the other hand, are found to be involved in a plethora of different functions. Contrary to 22 kHz calls, they are present during cooperative proximity to other colony members in activities such as mating or play and are also affiliated with positive anticipation of the described social contact. They have been compared to human laughter and may contribute to the coordination of cohesion among social groups. Additionally, they may be able to indicate incentivization or motivation related to a specific stimulus, serving as a quantifiable measure of positive affect elicited by rewards.
All of these matters aside, the real excitement behind USVs is their ability to change the way researchers are able to utilize their animal models. Established methods for understanding animal cognition are incredibly susceptible to anthropomorphization, or the attribution of human qualities to nonhuman things. This is dangerous because it leaves what ought to be an objective scientific procedure open to major biases, which could skew results or even create false ones. This is especially important to consider when one takes into account that the lives of these research animals are real, meaningful lives. Not only do they deserve, at the bare minimum, to be treated with respect and dignity while they live; it is also the duty of researchers to search for ways to extract the most meaning out of their sacrifice. USVs are an example of a behavioral observation that actively reduces ambiguity and the possibility of anthropomorphization. Instead of having to guess whether an experimental manipulation is having the adverse effect it is meant to replicate, or even potentially having to perform more extreme, harmful tests, the USVs will speak for the animals. Of course, no method is perfect, and this is very much a continuously developing field. But it is a step in the right direction toward reducing the hardships that animal models face.
Ultimately, by making studies more efficient and, most importantly, accurate, science can begin to reduce the number of animal lives it takes to make its discoveries. The most important rule to follow in animal research is to minimize
harm, suffering, and the number of animals used when conducting an experimental study. This means that in order to extract a scientific measure, the researcher must employ the smallest number of animals possible while still accounting for an adequate sample size that will allow for significant results, and must devise a method that will address the goal of the research question while causing the least mental and physical distress to the subjects. It is easy to quickly become desensitized and detached from empathy when working with animals, and it is easier still to fall into the trap of knowing exactly what the animals ‘must’ be feeling based on previous conclusions made by very partial, very one-sided, human-centric thinking. It remains up to humans to stay diligent and vigilant in checking these biases and to place emphasis on avoiding complacency in research and in what we think we know.
Works Cited:
Brudzynski, S. M. (2013). Ethotransmission: Communication of emotional states through ultrasonic vocalization in rats. Current Opinion in Neurobiology, 23(3), 310–317. https://doi.org/10.1016/j.conb.2013.01.014
“The Algebraic Background of Error-Correcting Code”
Aiden Hills
Imagine a scenario where mission control on Earth must communicate with a satellite as it orbits Mars, nearly 240 million miles away. This communication relies on binary commands, where a simple digit, '0' or '1', can dictate critical actions. For instance, '0' might instruct the satellite to capture vital images of the surface of Mars, while '1' could command it to maintain its trajectory. Yet, this seemingly simple exchange is fraught with challenges. The vast emptiness of space is not silent; it is filled with noise that can corrupt these binary whispers. Electromagnetic interference, cosmic radiation, or even fluctuations in the satellite’s own electronics can transform a '0' into a '1' and vice versa. This alteration may seem minor, but its consequences can be monumental, potentially leading to a missed opportunity to capture crucial data or, in the worst case, a malfunctioning satellite. In such a high-stakes exchange, how can we ensure that every command sent reaches the satellite accurately? The key lies in a blend of communication theory and algebra, manifested in error-correcting codes (ECCs). As we delve into the algebra behind ECCs, we uncover not just equations and theories, but a critical layer of assurance that enables reliable communication across the cosmos, impacting everything from space exploration to global communication networks.
2. The Necessity of Redundancy:
To combat errors in transmission, extra bits known as 'redundant bits' are inserted into messages. Instead of sending a '0', mission control would send '00000'. This redundancy is crucial for maintaining the integrity of communication with satellites. A single bit error could skew critical information, resulting in costly misinterpretations or operational failures. These redundant bits are therefore not merely a precaution; they are indispensable in ensuring that every piece of data received from space is as accurate as possible.
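This "send it five times" idea is itself a simple error-correcting code, the repetition code. Here is a minimal Python sketch of it (the function names are illustrative, not from any standard library): each bit is repeated five times, and the decoder takes a majority vote, so up to two flipped bits per group can be survived.

```python
def encode_repetition(bits, n=5):
    """Repeat each message bit n times, e.g. [0] -> [0, 0, 0, 0, 0]."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(code, n=5):
    """Recover each bit by majority vote over its n copies."""
    return [1 if sum(code[i:i + n]) > n // 2 else 0
            for i in range(0, len(code), n)]

sent = encode_repetition([0])          # mission control sends '00000'
received = [0, 1, 0, 0, 0]             # noise flips one bit in transit
assert decode_repetition(received) == [0]  # majority vote still recovers '0'
```

The cost of this robustness is bandwidth: five transmitted bits carry one bit of information, which is why the more efficient codes discussed below matter.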
3. Decoding the Complexity: ECCs Simplified
3.1 Basic Concepts:
In the field of error-correcting codes (ECCs), two foundational concepts emerge: 'linear codes' and 'Hamming distance.' A 'linear code' represents a systematic way of encoding messages where the original data is expanded into longer sequences that include built-in redundancy. This process ensures that even if parts of the message are distorted in transmission, the original information can be reconstructed. Think of linear codes as a form of packaging designed to protect the core content, the message, from the potential damage of an unpredictable journey through a communication channel. 'Hamming distance', on the other hand, is the metric
used to measure the minimum number of substitutions required to change one sequence into another. In the context of ECCs, it quantifies the error between the sent message and the received one. It's similar to examining two strings of Christmas lights to determine the minimum number of bulbs that need to be changed to make both strings identical. This concept, although abstract, is critical for ensuring that communication is not just secure but also robust enough to withstand and correct errors.
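The metric itself is a one-liner. Here is a short Python sketch, assuming the two sequences have equal length (the function name is illustrative):

```python
def hamming_distance(a, b):
    """Count the positions at which two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")
    return sum(x != y for x, y in zip(a, b))

# One bulb differs between the two "strings of lights":
assert hamming_distance("0000000", "0001000") == 1
```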
3.2 Hamming Code:
Delving into a classic example, we have the Hamming (7,4) code, named after Richard Hamming, who introduced this famous linear code. It takes a simple 4-bit message, which could represent anything from a command to a satellite, like 'take a picture' (0000) or 'adjust orbit' (1001), and encodes it into a 7-bit sequence. This encoding involves the addition of three specially calculated redundant bits. Together, these bits form a detectable pattern that, when disturbed, indicates that an error has occurred during transmission. The beauty of this code lies in its ability to not only detect but also correct a single-bit error within the 7-bit sequence. For example, consider a scenario where '0000000' is transmitted, but due to some interference, it is received as '0001000'. The Hamming code allows us to
compare the received message against the expected pattern and discern that the fourth bit is out of place. Using predefined rules, the system can then infer the original message, correcting the error without the need for retransmission. This capability is not merely a mathematical exercise; it has tangible implications. In satellite communications, a single bit error in a command sequence can misdirect a satellite's path or corrupt critical data. Linear codes like the Hamming code serve as an essential safeguard, enabling us to trust the commands and data transmitted over vast distances.
4. Behind the Scenes: The Encoding Process
4.1 Generator Matrix:
The encoding magic happens through something called a 'generator matrix.' This matrix takes our original message and, following specific mathematical rules, adds the redundant bits. Although it works through advanced algebra, you can think of it like a recipe, turning our basic ingredients (the original message) into a robust dish (the error-protected code) that can withstand the heat (errors during transmission).
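To make the recipe concrete, here is a small sketch using one possible systematic generator matrix for a (7,4) code; encoding is just a matrix-vector product with arithmetic mod 2. The specific matrix shown is an illustrative textbook-style choice, not the only valid one.

```python
# Systematic generator matrix G = [I4 | P]: the first four codeword bits
# copy the message, the last three are parity bits (all arithmetic mod 2).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    """Codeword = msg . G over GF(2): each output bit is a mod-2 sum of message bits."""
    return [sum(m * g for m, g in zip(msg, col)) % 2
            for col in zip(*G)]

assert encode([1, 0, 0, 1]) == [1, 0, 0, 1, 0, 0, 1]
```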
4.2 Parity Check Matrix:
Once we've received a message, how do we ensure it's error-free? Enter the 'parity check matrix.' This matrix interacts with the received sequence of bits, checking if
the redundancy pattern is disrupted. If the pattern doesn't match expectations, it indicates an error that needs correction.
Essentially, this matrix acts as our quality control, confirming the 'dish' (message) is just as intended.
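That quality-control step can be sketched as follows: multiplying a parity-check matrix H by the received word (mod 2) yields a 'syndrome', and an all-zero syndrome means the redundancy pattern is intact. The matrix below is one illustrative choice for a systematic (7,4) code, not the only possibility.

```python
# Parity-check matrix H = [P^T | I3] for a systematic (7,4) code:
# H applied to any valid codeword gives the all-zero syndrome.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(word):
    """Mod-2 sum of each parity check; all zeros means the pattern is intact."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

valid = [1, 0, 0, 1, 0, 0, 1]            # a valid codeword under this H
assert syndrome(valid) == [0, 0, 0]       # quality control passes
corrupted = list(valid)
corrupted[0] ^= 1                         # one bit flipped in transit
assert syndrome(corrupted) != [0, 0, 0]   # disrupted pattern flags the error
```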
ECCs might seem like an intricate ballet of numbers, yet their core purpose is refreshingly clear: they act as our digital guardians, shielding communications from the chaos and unpredictability of data transmission errors. This protection extends far beyond the realms of satellite signals.
ECCs not only rescue scratched CDs, allowing uninterrupted music and data access, but they also ensure clarity in calls and accuracy in QR code scanning, safeguarding information despite physical or signal damage. These silent guardians are essential in preserving the integrity of our digital data, from error-free phone communications to protecting the documents and photos stored on our personal devices. They're even embedded in the algorithms that keep our financial transactions secure, constantly working behind the scenes to ensure that every digit of our account numbers and every byte of our personal information is transmitted accurately and securely. As we continue into an era where data is king, the role of ECCs becomes ever more critical. They are not just a mathematical curiosity but a fundamental component in the infrastructure of our
digital lives, quietly ensuring reliability and accuracy in the face of potential digital disruption.
Works Cited:
Gallian, Joseph A. “Introduction to Algebraic Coding Theory.” Contemporary Abstract Algebra, CRC Press, Taylor & Francis Group, Boca Raton, FL, 2021, pp. 526–555.
PIXELS OF OUR UNIVERSE
By Kritim K. Rijal
The notion of humans living in a simulation gets tossed around quite often in general conversation. From sci-fi movies to advanced video games, we experience the gist of a simulated world in various forms. From there, questioning the “reality” of our own worldly experience is but a mere step away. The perpetual advancement of AI, which has continued to predict, imitate, and almost create experiences that previously seemed intrinsic to humans alone, has done nothing to quiet the conspiracy theorists. However, all of this is observational; its basis is human intuition alone. To engage with the idea is one thing, but to wholeheartedly convince the entire species of its fake existence might require more than the enunciation of a gut feeling. So, what might that logical reasoning look like? Allow me to share.
THE SIMULATION HYPOTHESIS
The simulation hypothesis, at its core, proposes that our world is the creation of a more advanced civilization: that, fundamentally, we live in an artificially constructed environment and are but mere characters, run or halted at the push of a button. Having said that, for instance, as you play your favorite video games, it is hard to unsee the potential that one of the characters (made more and more lifelike with advancing technology) in the game could be another human just like you. If that is the case, and that is the kind of simulation we are predicting ourselves to be in, then there are a few attributes we can look for in our world as evidence of the simulation hypothesis.
That’s where my interpretation of Planck’s constant comes in. Have you ever tried zooming in on a picture? Of course you have. Stupid question. But have you tried really zooming in? That’s where the magic happens. No matter the quality of the picture, after a certain point you’ll start to see the nice, smooth curves of your picture break into clunky squares of a single color. Those squares are called pixels. They are what your picture is made of; a finer detail for your image doesn’t exist. And these pixels, or a version of them, would also be present in any kind of simulated environment that our computers create. They can be thought of as the limit of reality for these simulations. The leap of my argument here is that if our world is a simulation, the Planck length would be the pixel of our world. The Planck length, derived from Planck’s constant, is the smallest length that physics can meaningfully describe; a shorter distance conceptually does not exist. Attempting to go past this length and account for any activity that happens there (movement of objects, distance covered, speed) would break the laws of physics. It is a finer detail that lies beyond even the minute subatomic particles and can only be accounted for as a unit that comprises all of our universe, like a pixel of your image. Thus, the existence of this limit replicates the behavior of a pixelated simulation, thereby making an argument for the simulation hypothesis.
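As a point of reference (the values here are standard physics constants, not drawn from the essay itself): the "pixel size" alluded to above is formally the Planck length, built from the reduced Planck constant ℏ, Newton's gravitational constant G, and the speed of light c:

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \text{m}
```

For comparison, this is roughly twenty orders of magnitude smaller than a proton, which is why no experiment comes anywhere near probing it directly.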
I believe another imprint that resembles human simulation is waves. Similar to the electricity that passes through the circuits of a computer and gives life to a simulation, waves (in their various forms) have directed energy and given life to objects all around the universe. Sound, light, electromagnetism, and, with the help of quantum physics, now even matter can be represented by waves. Ever since the Big Bang, waves have traveled far and wide, lighting up the universe, just as one can imagine the motherboard of a computer with all its circuits lighting up with electricity as it gives life to a simulation. And if I dare to extrapolate the metaphor: as evidenced by observation through telescopes, our universe is still expanding. This means the simulation that was meant to be created is not complete in its entirety. In other words, the reason for our creation has yet to happen. We have it to look forward to.
But the problem with imprints, or any evidence we look for in our world, is that they are subject to interrogation. These attributes of our world that we argue to be proof of a simulated environment may in fact be part of the simulation itself. These imprints might be intentionally created for us to experience, and we would not be able to tell them apart. This demonstrates the need for an argument that surpasses any observational bias. Following a probabilistic methodology gives a feel for how that argument may look. If we assume that worlds like ours can build simulations, then, given the development of video games and other advances in computing, we can tell that we have created simulations of our own. Now, who is to say that the characters of those simulations have not created simulations of their own? Thus, a recursive, domino-like chain is created in which the “real” world is only one and the rest (which could number in the millions) are all simulations. That makes the odds that we are not in a simulation 1 in a million. Pretty slim chance.
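The counting at the end of this passage can be made concrete with a toy calculation. The one-million figure is the essay's illustrative number, and `probability_real` is a made-up helper for this sketch, not an established formula:

```python
# Toy sketch of the essay's counting argument: if there is one "real"
# world and n simulated ones, and an observer is equally likely to find
# itself in any of them, the chance of being in the real world is
# 1 / (n + 1).
def probability_real(n_simulated):
    return 1 / (n_simulated + 1)

# The essay's illustrative figure: a million simulated worlds.
print(probability_real(1_000_000))  # roughly one in a million
```

The argument's force comes entirely from the assumption that simulated worlds vastly outnumber the single real one; change that count and the odds change with it.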
As an unproven hypothesis, this debate over our possible existence as a simulation is an everlasting one. Any intuitive analogy is as compelling an argument as any mathematical proof. More interesting, however, is to contemplate the implications if we are in a simulation. What does that mean for humans as a collective species? The big questions of why we are here and what our objective is would find a slight nudge toward answers. Like any simulation, our objective would be to replicate the behavior of whatever hypothetical scenario we have been created for. In essence, our purpose would be an unintentional contribution to knowledge. Perhaps one can find solace in the fact that by just being their truest self, they have become part of something huge. In fact, their truest self is what they have been created to be.
And on the off chance that we happen to be unintentional byproducts in the process of creating something huge, which seems equally likely (given the immensity of the Brobdingnagian universe) and unlikely (given the odds that had to align for the creation of life on Earth), we would have no purpose at all. There is a peculiar comfort in that unbounded freedom too, where your life is in every sense your life and your choices for it are solely yours to make.
“Faculty Interview: Dr. Leo”
- Paxton Mills
Originally from Yorktown, Virginia, Dr. Michael Leopold, Floyd D. and Elisabeth S. Gottwald Professor of Chemistry, came to the University of Richmond with a mission: to show the world that undergraduates are capable of doing R1 research. He was drawn to the University for its vision of “combining scholarship with excellent teaching” and the Chemistry Department’s mentorship of undergraduates in non-traditional classrooms (i.e., research laboratories).
“The University recognizes the value of those experiences and has provided a lot of resources including stipends for students, salaries for faculty, support personnel, equipment, and materials. When I tell colleagues at other primarily undergraduate institutions (PUIs) what our institution provides to promote research, they are blown away,” Leopold says.
At UR, Dr. Leopold and his team of student researchers study bioanalytical nanomaterials and biosensors, work that has involved topics ranging from explosive detection to medical device development to forensic field tests for fentanyl. He oversees a bustling and productive research lab, with an abundance of student presentations and publications over its long history. His science has taken him across the country, from coast to coast, and even back home to his own kitchen table, where he shares with his own children “the idea of discovery and exploration – seeing, doing, and achieving things, even small things, that have never been done.
Exploring where the textbooks end. They love it and it makes them dream.”
Outside of the halls of Gottwald, Leo spends what little free time he has with his veterinarian wife and two awesome kids: Mick and Kelsey. If he had the opportunity to teach a non-STEM centered class at Richmond, Leo says he would design a course on the meaning behind Bruce Springsteen lyrics, while the chemistry course of his dreams would involve the surface analysis of materials. “I’m planning it in my mind already!” Leo exclaimed.
Anyone who knows Dr. Leopold knows that he never strays too far from the Science Center; “I have my biological kids and then I have the students here,” Leo says. As for what keeps him engaged and connected to his UR students, he says, “I love showing students that they are capable of doing hard things. My job is to get you where you are going and to be happy doing it.” Dr. Leo also emphasizes the importance, for all of us, of effectively training a new generation of scientists and doctors with societal drive and a passion for making a difference.
Dr. Leo is hopeful for the future of Gottwald in preparing students for whatever comes next after their graduation. “We, the faculty, should continue to work hard to ensure the reality matches the promise. Therefore, it is critical for professors to facilitate and inspire successful examples of this mentorship.”