Immerse 2016


Prof. Bhaskar Ramamurthi, Director, IIT Madras

Message from the Director

I am delighted to be a part of the third issue of Immerse. We can now paraphrase Ian Fleming and aver that with this issue, Immerse is neither happenstance nor coincidence – it is here to stay. Glancing through some of the articles in this issue, I am happy to note that they retain reader interest by combining an “xxxx made simple” approach to science and engineering, along with glimpses of the people behind the research. The photographs and sketches bring the ideas to life. The all-too-brief cameos about the authors add garnish to the articles. This issue, more than ever, brings out the extent to which a vibrant research culture has embedded itself at IIT Madras. I am grateful to GE for supporting this praiseworthy venture of our students.


From the Editors

Second row (L to R): Sanket, Nithin, and Swetamber. First row (L to R): Kiranmayi, Raghavi, and Rohit.

Greetings from Team Immerse! Immerse is the IIT Madras Magazine on Research in Science and Engineering. Our endeavour is not only to showcase some of the recent developments in research and innovation at IIT Madras, but also to communicate the science behind them in the simplest way possible, for better understanding and appreciation. This year marks the third year of Immerse. Our cover page this year celebrates 2015, UNESCO’s International Year of Light. Just like the seven colours of light, this year’s issue covers a spectrum of seven themes - communication, computing, economics, health, energy, defence and materials. In each of these features, besides the technical and emotional facets of research at IITM – from the faculty’s passion and expertise to the research scholars’ enthusiasm and perseverance – we have tried to share with you our own fascination for the admirable work going on. We thank the Director and the Dean of Students for their constant encouragement. We gratefully acknowledge our sponsor, GE. We save our most special thanks for all the professors and students who have been gracious and generous with their time and effort while acquainting us with their work. Above all, we will consider our job well done if this issue rekindles the curiosity of even one reader, inspiring them to indulge themselves in the exciting world of science and engineering. Carl Sagan once said, “Somewhere, something incredible is waiting to be known”, to which we add – “in the following pages.” Immerse yourself!



IMMERSE IIT Madras Magazine on Research in Science and Engineering

www.t5eiitm.org/immerse

For Immerse

Editors

Kiranmayi Malapaka, Nithin Ramesan, Raghavi Kodati, Rohit Parasnis, Sanket Wani, Swetamber Das

Contributors

Akshay Govindaraj, Ananth Sundararaman, Aparnna Suresh, Arundhathi Krishnan, Aryendra Sharma, Aslamah Rahiman, Ayyappadas AM, Tejdeep Reddy, Isha Bhallamudi, Nikhil Mulinti, Rahul Vadaga, Sachin Nayak, Shivani Guptasarma

Consulting Editor

Nithyanand Rao

Photographs

Vivekanandan N

www.shaastra.org

Design

Typesetting

Amritha Elangovan, Sree Ram Sai, Vishal Upendran, Raghavi Kodati, Nithin Ramesan, Rohit Parasnis, Sanket Wani, Swetamber Das

For Shaastra

Sponsorship

Raghul Manosh, Bhavik Rasyara, Shashanka S Rao, Mahesh Kurup

Except the images, which belong to the respective copyright holders as identified, all content is © The Fifth Estate, IIT Madras. This magazine was typeset entirely in LaTeX.


Industrial Internet: Bringing Big Iron and Big Data for Better Quality of Life

Vinay Jammu, Technology Leader - Asset Performance Analytics, Global Research, GE India Technology Centre, Whitefield, Bangalore-560066

Connectivity and computing power have been changing our world in a significant way over the past two decades. Efficiencies of consumer-oriented industries such as retail, banking, transportation, and hospitality have been transformed. Today we carry in our cell phones the same computing power as the supercomputers of a few decades ago. The Consumer Internet has enabled billions of people to be connected to the internet to exchange information and get improved services. This has been enabled by apps such as Uber, Amazon and iTunes, which connect the service provider directly to the consumer through digitisation, eliminating intermediary processes that drive inefficiencies. New Consumer Internet giants such as Amazon, Apple and Google were born in the past couple of decades and grew significantly, adding billions of dollars in new revenues in just one decade.

The Industrial Internet will have a bigger impact by improving efficiencies in industries such as power generation and distribution, healthcare, transportation and manufacturing, to name a few. Improved efficiencies in these industries mean lower cost and better quality of healthcare services for all of us, more reliable, uninterrupted power, lower cost of travelling, and reduced emissions and greenhouse gases. The Industrial Internet connects big iron, such as jet engines, with big data to optimise the performance of these big industrial assets, thereby saving crores of rupees for the owners, operators and users. For example, even a small percentage improvement in India's power grid would amount to gigawatts of additional capacity, equal to many coal plants. It saves the cost of adding these plants and eliminates the greenhouse gas emissions that would have come from them.

To enable these efficiencies arising from the Industrial Internet, new technologies are needed. New sensing technologies that can provide measurement of critical parameters such as temperatures and pressures in jet engines are needed. Low cost, reliable and secure communication technologies to transmit large amounts of data from mobile and remote assets operating in harsh environments would be a key enabler for the Industrial Internet. Finally, cloud-enabled technologies are needed to manage the data and perform advanced analytics to optimise the performance of these machines, extend the life of critical parts, predict and prevent unscheduled outages and failures, optimise parts inventory, and perform the right maintenance at the right time with the least amount of time and resources. To achieve the best efficiencies of scale, end-to-end solutions are needed that can improve efficiencies from fuel production, fuel transportation, electrical generation, transmission and distribution to electricity use, as an example for power production. This requires an ecosystem of interoperable solutions that can be stitched together for different applications.
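To get a feel for the scale of such savings, here is a rough back-of-the-envelope calculation. Every number in it is an illustrative assumption, not a figure from this article:

```python
# Back-of-the-envelope: what a 1% grid efficiency gain could mean.
# All numbers here are illustrative assumptions, not data from the article.
grid_capacity_gw = 300.0   # assumed installed capacity of a national grid
improvement = 0.01         # assumed 1% efficiency improvement
plant_size_mw = 500.0      # assumed size of a typical coal plant

recovered_gw = grid_capacity_gw * improvement
equivalent_plants = recovered_gw * 1000 / plant_size_mw

print(f"Recovered capacity: {recovered_gw:.1f} GW "
      f"(~{equivalent_plants:.0f} plants of {plant_size_mw:.0f} MW)")
```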


To support this broader vision, GE is working on two technologies that enable delivery of the value that the Industrial Internet promises. GE's Predix platform is the first cloud-based open Industrial Internet platform designed to host industrial data and industrial applications (https://www.gesoftware.com/predix). It is an open platform that enables combining solutions from different companies to solve Industrial Internet efficiency challenges. The second technology GE is working on is the Digital Twin. A Digital Twin is a digital clone of a physical asset, such as a jet engine or an MRI machine, that provides key insights and recommendations on improving the efficiency of the asset. A Digital Twin can provide key insights into the remaining useful life of parts in industrial equipment, prognose and prevent impending failures, and provide recommendations to optimise performance. To enable this Industrial Internet revolution, GE, along with IBM, AT&T, Cisco and Intel, has founded the Industrial Internet Consortium (http://www.iiconsortium.org/) with the vision of setting the architectural framework and direction for the Industrial Internet. The Industrial Internet will be the next technology wave to revolutionise the efficiency of industrial processes and provide a better quality of life for all people. ⌅
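To illustrate the idea behind a Digital Twin, here is a toy sketch of a degradation-tracking model. It is not GE's Predix API; every name, threshold and rate in it is invented for illustration:

```python
# Toy "digital twin": track a health index from sensor data and flag
# maintenance before a threshold is crossed. Purely illustrative.
from dataclasses import dataclass

@dataclass
class EngineTwin:
    wear_rate_per_hour: float      # assumed degradation-model parameter
    health: float = 1.0            # 1.0 = new, 0.0 = end of life
    hours: float = 0.0

    def ingest(self, hours_run: float, vibration_g: float) -> None:
        # Vibration above a (made-up) baseline accelerates modelled wear.
        factor = 1.0 + max(0.0, vibration_g - 0.5)
        self.health -= self.wear_rate_per_hour * hours_run * factor
        self.hours += hours_run

    def remaining_useful_life(self) -> float:
        # Hours left until health hits the (made-up) retirement limit of 0.2.
        return max(0.0, (self.health - 0.2) / self.wear_rate_per_hour)

twin = EngineTwin(wear_rate_per_hour=1e-4)
twin.ingest(hours_run=500, vibration_g=0.7)
print(f"Estimated remaining useful life: {twin.remaining_useful_life():.0f} h")
```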






“It is an experience like no other experience I can describe, the best thing that can happen to a scientist, realising that something that’s happened in his or her mind exactly corresponds to something that happens in nature. It’s startling every time it occurs. One is surprised that a construct of one’s mind can actually be realised in the honest-to-goodness world out there. A great shock, a great, great joy.”

- LEO KADANOFF (1937-2015)



India shares a large tract of mountainous borders with multiple countries, manning which is a nightmare for the army. Not only is it difficult to build roads at high altitudes, but the army also has to contend with landslides and avalanches while traversing these roads. Naturally, one thinks, “Why not fly rather than go from one place to another on the ground?” Yes, we are talking about jetpacks - devices worn on the back that allow a single user to fly by means of propulsion produced by rapidly expelled gases. Jetpacks are for real now, but they are nothing close to the kind you see in science fiction movies. They have been used only for ceremonial purposes, and no armed force in the world currently uses them. We talk to Major Lakshyajeet Singh Chauhan, an engineer in the Indian Army, about his research project on developing a rocket backpack motor for the Indian Army.

Maj. LS Chauhan details the hardships faced by the Indian Army in remote locations through his own experience of working in the army for nine years. He says, “I was posted in the Siachen glacier as part of my infantry attachment. We used to witness sudden snowfalls and all the routes used to be blocked for days at a stretch. It is impossible to land a helicopter at such altitudes. If someone was ill, there was no way you could take him out of there in such conditions.” He goes on to say that due to the ruthless weather, he had to spend a much longer time at Siachen than he was designated to.

Jetpacks can be a real boon for the Indian Army. One can go to remote places without any hassle. Consider the same example of the Indian Army guarding mountain borders. If a couple of people in a company can fly, they can build a ropeway between the cliffs of two mountains, instead of the entire company coming down and climbing up again. The time taken to go from one peak to another is reduced. With this idea in mind, Maj. LS Chauhan started his M.Tech. project at IIT Madras. He discussed it with his guide Prof. PA Ramakrishna from the Department of Aerospace Engineering, who agreed to work with him on this and give him the required guidance. Prof. Ramakrishna explains, “It is more like this... if you see monkeys climbing trees to get at the fruit, you will observe that they will first climb one tree and jump from one tree to another. People working in such fields do the same. It is possible in this terrain too. It is basically the same thing. One somehow manages to climb one peak and uses that advantage to go to another. This was the idea with which we started this work.”

Rocket backpacks have been around for some time now, but they have only been used for fun and recreation. They have a very low flight time and can just about carry a person. This project aims to build a portable one that can carry a soldier and his equipment. There was an attempt by Bell Helicopters to make one such device for the US military. They built a device called the Bell Rocket Belt, which was eventually rejected because it was expensive and there was no scope for carrying payload. Let us take apart the rocket used in a jetpack. In the simplest terms, a rocket has a combustion chamber in which fuel comes in contact with a chemical called the oxidiser that helps the fuel burn.

Major Lakshyajeet Singh Chauhan was an M.Tech. student (2013-2015) in the Department of Aerospace Engineering at IIT Madras. Before his Masters, he worked for 9 years for the Indian Army and was posted in Siachen and Kargil, among other places. He is currently an instructor in the Faculty of Aeronautical Engineering at the Military College of Electronics and Mechanical Engineering, Secunderabad.



Jetpacks — A Boon for the Indian Army

Surprise Attack in the Mountains: A quick climb onto the mountains will help carry out a surprise attack on the enemy who is still struggling to climb up. In general practice, the Indian Army uses mountaineering equipment to climb mountains for surprise attacks.

Counter Insurgency Operations: A quick lift-off with the rocket backpack while chasing a militant will enable a soldier to take a shot at the enemy or guide his forces suitably.

NSG (National Security Guard) Operations: The NSG operation on the Taj hotel during 26/11 saw immense involvement of helicopters. Jetpacks will enable NSG commandos to easily reach the roofs of establishments under attack.

Attack Operations: It is difficult to attack the enemy from the front when they have laid down mines and put up barbed wire. Rocket backpacks can be used to jump over these obstacles and charge at the enemy.

High Altitude Operations: The Indian Army has been deployed at Siachen - the highest battlefield in the world. Jetpacks will help relocate from one post to another, get supplies and evacuate casualties. During attacks, they will help in raiding enemy posts located at peaks and in faster deployment of troops.

Recovery Operations: Jetpacks can be utilised to recover fallen vehicles, attend to casualties, recover important documents, etc. They will also be effective in reaching remote locations and will assist armed forces in withdrawal from covert operations.

What they (the Indian Army) have is the following: when they enter enemy territory, the last stretch is a minefield guarded by the entrenched enemy. They face huge casualties while running through this. It is a throw of the dice - there’s a high probability that one might step on a mine and lose one’s leg or one’s life. With this rocket backpack motor, one can catapult oneself over this last stretch and then take on the enemy.



Prof. PA Ramakrishna obtained his PhD from the Indian Institute of Science, Bangalore and joined IIT Madras in 2005. Currently, he is working as a Professor in the Department of Aerospace Engineering. His research interests are in aerospace propulsion, especially solid and hybrid propellant combustion.

The burning of the fuel and the oxidiser controls the thrust. Controlling this thrust is crucial to control the motion of the person wearing the rocket backpack. Initially, for one to go up, the thrust has to be greater than the pilot’s weight. And when the same person has to come down, he has to reduce his thrust so that it is less than his weight.
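In symbols, this is a simple vertical force balance (the 120 kg mass below is an assumed illustrative figure, not one from the project):

```latex
% Net vertical force on pilot + pack of total mass m under thrust T:
%   F_net = T - mg;  ascent requires T > mg, descent requires T < mg.
% Example with an assumed m = 120 kg (soldier plus backpack):
\[
  F_{\text{net}} = T - mg, \qquad
  T \;>\; mg \;=\; 120 \times 9.81 \;\approx\; 1.2\ \mathrm{kN}
  \quad\text{(to climb)}
\]
```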

The type of fuel and oxidiser involved is used to classify the rocket. For example, in a liquid rocket, both the fuel and oxidiser are liquids, while in a solid rocket, both of them are solids. The liquid rocket used in the Bell Rocket Belt had hydrogen peroxide as its oxidiser, which was hard to handle since it was explosive and ate through metal. Maj. LS Chauhan and Prof. Ramakrishna were looking for something more benign than that. They wanted hybrid rockets to do the job for them. Such rockets have a liquid oxidiser and a solid fuel. Their primary advantage is that you can control the flow rate of the oxidiser and get the thrust level you require. This feature also exists with liquid rockets but not with solid rockets. But it is easier to control thrust in hybrid rockets than in liquid rockets, as only one out of the fuel and the oxidiser is a liquid. Moreover, hybrid rockets are known to be very safe. Prof. Ramakrishna’s lab has been working on them for years now. They have conducted a large number of experiments on them without any mishaps. It is important to take safety into consideration when there is a human being on the other side. They plan

to use water as the oxidiser in this system, as it is not only very safe but also readily available. Maj. LS Chauhan compares his experiences of working in the army with that of working on the research project. He says that he was only doing a maintenance job in the army. After joining academia, he had to put his mind back into thinking mode. He describes his experience of working for months on something new and innovative as wonderful. The work was as strenuous as the army, where he had to put in long hours every week. Yet, he says that he did not really mind working long hours, as the work was very interesting. He feels that failure is a part of research. “Even though I put in my best efforts, something or the other kept going wrong. Sometimes, I thought I was on the wrong track. But I kept going and eventually finished the propulsion part of the project and even designed the combustion chamber for the rocket backpack motor.” The different stages of the project are illustrated through the flowchart below.



Flowchart: Ignition – Continuous combustion – Experimental set-up
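The throttling advantage of hybrid rockets described above can be summarised with the standard textbook relations for hybrid motors; these are general equations, not results from this project:

```latex
% Solid-fuel regression rate in a hybrid rocket (empirical law):
%   \dot{r} = a\,G_{ox}^{\,n}
% where G_{ox} is the oxidiser mass flux and a, n are fuel constants.
% Thrust follows the total mass flow through the nozzle:
%   T = (\dot{m}_{ox} + \dot{m}_{fuel})\,v_e
% so throttling the single liquid (the oxidiser) throttles the thrust.
\[
  \dot{r} = a\,G_{ox}^{\,n}, \qquad
  T = \left(\dot{m}_{ox} + \dot{m}_{fuel}\right) v_e
\]
```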

He also acknowledges the need for a good mentor in a research project with the following words: “My professor helped me out a lot. Only the idea was mine. Most of the thinking and the design was done by him. I was only following his instructions.” One might ask the following question: “Bell Helicopters was a large organisation developing the rocket belt. How can a few students in the lab do it?”, to which Prof. Ramakrishna has a ready reply. “There are no technical difficulties associated with the project. We do have all the technology to develop it. Only its implementation and engineering within the weight budget and safety considerations is hard. All these things need to be taken care of and the product needs to be brought out.” As a consequence of Maj. LS Chauhan’s work, a proposal for inclusion of the project under the Army Technology Board has been forwarded to IIT Madras by the Military College of EME, where Maj. LS Chauhan is currently posted. This collaborative project, worth several lakhs of rupees, is expected to start in January 2016. It aims to develop a rocket backpack system capable of carrying a soldier for at least one minute at cruise speed. All this needs to be done within a budget of a few lakhs of rupees per backpack, to ensure that the large requirement for such backpacks can be met at a reasonable cost.



Despite the availability of advanced technology and the required funding, the main challenge - that the human body is not naturally adapted to fly - remains an obstacle in this endeavour. The jetpack must account for all factors of flight, such as sufficient lift and, to some extent, stabilisation. And, of course, the soldier operating the jetpack must be taught to fly it.

In conclusion, Maj. LS Chauhan wishes to give the following message to the students – “I wish good engineers and scientists would stay in India and work for the country instead of going abroad just for the sake of a higher pay. Being Indians, we should not leave our country. The joy of getting a fat paycheck is nothing compared to the satisfaction derived from serving the country, say by working in the army. My dear Indians, please stay back, work for the development of India and make it a strong and developed country.” ⌅

The images are from Maj. LS Chauhan’s M.Tech. thesis.

Meet the Author

Sachin Nayak is a final year B.Tech. student of the Department of Electrical Engineering at IIT Madras. He loves microcontrollers, coding, swimming, running and, of course, reading and writing. He has been part of several organisations in the institute like the CFI Electronics Club, The Fifth Estate, Shaastra, etc. He plans to pursue graduate studies in Computer Science in the near future.



Through the Looking Glass

By Ayyappadas A M

How digital photoelasticity research empowers the mobile revolution

Imagine a research group consisting of mechanical engineering graduates and headed by a world-renowned expert in the subject area of experimental stress analysis. Could they be silently empowering the mobile and IT revolution and technology convergence, by doing engineering research from their subject domains? This is the story of such a technology bridge, made possible through furthering the understanding of science and its applications. Their story often goes unsung, although their work shines out from the smiles you capture through Instagram or the high quality videos played from your Blu-ray discs. We will see how diverse technologies such as image processing, numerical computing, and stress analysis have come together to help the mass production of optical devices.

Let there be light

The history of science and technology is replete with examples of cross-disciplinary application of scientific principles which have accelerated technological developments. In this case the common thread happened to be light. In fact, it is interesting to note that the phenomenon which finally came to draw the demarcation line between classical and modern physics also links different disciplines. The digital photoelasticity research group, headed by Prof. K Ramesh from the Department of Applied Mechanics, IIT Madras, are the people behind this impressive feat. Their journey began through the Indo-European Union project under the EU's FP7 initiative, known as SimuGlass. They started off with a straightforward, though not necessarily simple, objective - to validate the results from a finite element simulation



Dr. K Ramesh is currently a professor at the Department of Applied Mechanics, IIT Madras. He was its Chairman from 2005 to 2009 and was formerly a professor at the Department of Mechanical Engineering, IIT Kanpur. In recognition of his significant contributions in photoelastic coatings, the F. Zandman award was conferred on him by The International Society of Experimental Mechanics in 2012. He is a Fellow of the Indian National Academy of Engineering (2006).

tool developed for predicting the residual stresses produced in optical lenses during the manufacturing process known as precision glass moulding. The challenges faced during their research made them rise to the occasion by unravelling important insights about the physics behind the phenomenon of residual stress in moulded lenses. Their work has the potential to make the production of optical lenses much more efficient, adding to the mobile revolution.

Prof. K Ramesh has been involved with research in digital photoelasticity for more than two decades. He has produced milestone papers in the subject area, and is recognised as an authority in his field. Therefore, it is not surprising that he is a key collaborator in a high-impact industry project involving European and Indian institutes and industrial partners. The European partners of the SimuGlass project consisted of two academic research institutes - the Fraunhofer Institute for Production Technology IPT and the Centre de Recherches de l’Industrie Belge de la Ceramique - and two industry partners – Kaleido Technology and EcoGlass - while the Indian partners included the Central Glass and Ceramic Research Institute (CGCRI), Indian Institute of Technology Madras (IITM), Indian Institute of Technology Delhi (IITD) and Bharat Electronics Limited (industry partner). The Indian wing of the project was headed by Dr. Dipayan Sanyal from CGCRI. The stated aim of the project was to enhance the understanding of the precision glass moulding process and to increase the quality of the process by developing an integrated Finite Element tool. It was also envisaged that through the project “the state-of-the-art of manufacturing in India and parts of Europe, which follows the grinding and polishing route, will be replaced with the advanced precision press forming route, especially for manufacturing the advanced optical elements.”

Precision Glass Moulding


As a collaborating institute, the role of IIT Madras was initially limited to the measurement of residual stresses in optical lenses using methods from photoelasticity and validation of the finite element method tool for process optimisation. Tarkes Dora P and Vivek Ramakrishnan, PhD students working with Prof. K Ramesh at the Digital Photoelasticity Lab, Department of Applied Mechanics, have been involved with the project from the time they joined the Indian Institute of Technology Madras. Tarkes, whose PhD work is almost exclusively based on the research done under the auspices of the project, had this to say: “You know, it was like solving a long puzzle, by taking one challenge after another in diverse fields, until a larger picture emerged.” So what is precision glass moulding and why is it relevant? Tarkes replies, “The conventional manufacturing process of optical lenses is a multi-step and highly time-consuming process.” And is that all? “No,” Tarkes continues, “The digital revolution has unleashed a new market for such precision glasses. In today’s world, everyone



Steps involved in Precision Glass Moulding

with a mobile phone is an amateur photographer. The convergence has come to the level that one or more optical devices have become standard in most consumer electronics, and particularly in communication devices. The demand for these devices has grown exponentially in the past decade. This has created a situation where dependence on the conventional production process involving grinding and polishing has become virtually impossible.”

Precision glass moulding involves five steps, as shown in the figure. First, the cold glass blank with a defined geometry, called a gob, is loaded into the mould. The oxygen is removed from the working area, followed by nitrogen filling. The whole system is then heated. The variables that affect the quality of the product – the temperature and the force applied – come in after this stage. The glass is heated to a point slightly above what is known as the glass transition temperature. The glass transition temperature is the temperature at which glass changes from a hard and relatively brittle state into a molten or rubber-like state. Its value depends on the composition of the glass. Optical glasses have low transition temperatures, of a few hundred degrees Celsius. The mould quality degrades due to wear and tear if glasses with higher transition temperatures are used.



After heating, the glass gob is pressed to the desired shape. The whole assembly of lens and mould is cooled to room temperature. The final lens can be directly used for the desired application. Residual stress is the internal stress distribution locked into a material. These stresses are present even after all external loading forces have been removed. Residual stresses are developed in the lens during the cooling. In the conventional method, the glass lens is annealed (a heat process that involves slow cooling) intermediately. This minimises the residual stresses. There is also the issue of shape deviation in the lens. Repeated trial-and-error experimentation to minimise the residual stresses and shape deviation is prohibitively expensive, given the large cost involved in the manufacturing of the moulding tools. Very few companies, such as Aixtooling GmbH, produce the ultra-precise high quality moulding tools. Thus, numerical computation with the finite element method has become the obvious choice to get the optimised shape of the mould and for selection of thermal cycles.
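As a rough illustration of how the five-step cycle just described might be represented and sanity-checked in code, here is a minimal sketch; the step order follows the article, but every temperature in it is an invented placeholder:

```python
# Toy representation of a precision-glass-moulding cycle.
# Step order follows the article; every number is an invented placeholder.
GLASS_TRANSITION_C = 550.0   # assumed Tg of the optical glass

cycle = [
    ("load gob, purge O2, fill N2", None),
    ("heat",  GLASS_TRANSITION_C + 20.0),  # press slightly above Tg
    ("press", GLASS_TRANSITION_C + 20.0),
    ("cool",  25.0),                       # back to room temperature
]

for step, temp_c in cycle:
    if step == "press" and temp_c <= GLASS_TRANSITION_C:
        raise ValueError("glass must be above Tg before pressing")
    target = "n/a" if temp_c is None else f"{temp_c:.0f} degC"
    print(f"{step:30s} -> {target}")
```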

Manufacturing meets photoelasticity

As soon as the team at IIT Madras took the plunge into the problem, they encountered several hurdles. It was one thing to do photoelastic measurements with plastic materials. Optical glasses were, however, a totally different territory. The fringe widths to be measured were in microns as compared to millimetres, and the requirement of precision was also higher. Glass is a weakly birefringent material. The fringe order usually observed is less than one. Measurement of birefringence in glass up to nanometres is possible using an instrument called an automatic polariscope (e.g., the one manufactured by M/s GlassStress Ltd., Estonia, which uses a phase shifting technique). There were ‘low’ and ‘high’ photoelastic constant glasses whose stresses were to be measured. The accuracy of measurement suffers when the range of values goes from one extreme to the other, while using the same process. They overcame the precision-related hurdles by devising two experimentation techniques. The team proposed a new experimental technique using the Carrier Fringe Method for photoelastic calibration of glass for higher values of photoelastic constants. Also, for materials with relatively low photoelastic constants, a new experimental method involving a phase shifting technique was devised.

The Finite Element Method based Process Optimisation

The accuracy of the final product is of utmost importance as far as precision moulding is concerned. The conventional method for obtaining high accuracy has been based on trial and error, and the domain expertise of the process engineer. The SimuGlass project envisaged changing this paradigm. And indeed, they found a breakthrough. A simpler method to obtain the optimised design of the mould with submicron form accuracy (shape deviation of less than a micron) was proposed by the IITD and IITM teams.

Bench-Top Precision Glass Moulding Apparatus. Source: PGPL, Isfahan University of Technology

The Finite Element scheme for simulation and analysis of the moulding process involves several other challenges. The thermo-mechanics of glass involves two aspects - viscoelasticity and structural relaxation. Viscoelasticity models are available in commercial software packages like ABAQUS. The structural relaxation has to deal with the nonlinear thermal expansion.



Photoelasticity

Photoelasticity is an experimental method to determine the stress distribution in a material. It is advantageous because of its ability to give a fairly accurate picture of stress distribution, even around abrupt discontinuities in materials. A typical photoelastic measurement setup consists of a light source, a polariscope, a transparent material which is to be analysed, and an image capturing device. First described by the Scottish physicist David Brewster, the method has come a long way to become a very important tool in engineering and research.
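The quantitative backbone of the method is the stress-optic law, a standard textbook relation stated here for context:

```latex
% Stress-optic law: the fringe order N observed in a model of
% thickness h relates the in-plane principal stresses:
%   \sigma_1 - \sigma_2 = N F_\sigma / h
% where F_\sigma is the material fringe value (the "photoelastic
% constant" that the IIT Madras team calibrated for each glass).
\[
  \sigma_1 - \sigma_2 \;=\; \frac{N\,F_\sigma}{h}
\]
```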

There was no standard method available in commercial packages for solving this problem with sufficient accuracy. Tarkes from the IIT Madras team, in collaboration with the IITD team, developed the code for solving the structural relaxation problem. At the beginning of the project the use of digital photoelasticity was only envisaged as a validation tool. But soon this was to change. The key issue was that the physics of heat transfer involved in the moulding process was not well understood. This is to say that the heat transfer mechanisms between the mould and the glass, and between the glass and the surrounding N2 atmosphere, had to be known for sure if the numerical computation results were to be reliable. The experiments were conducted in the facility at the Fraunhofer Institute for Production Technology, Germany. When the results from the finite element model were compared, it was evident that the assumptions made about the heat transfer mechanism for the glass and N2 interaction were far from accurate. This prompted them to relook at the problem from the heat transfer point of view. They did a computational fluid dynamics simulation of the cooling stage of the moulding process. This revealed that the mechanism was likely not convection, as had been believed or assumed previously.
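One simple way to test a convection assumption against measured cool-down data is to check how well an exponential (lumped, convective) model fits. The sketch below shows the general idea only; it is not the team's actual analysis, and the data in it is invented:

```python
# If cooling were purely convective (lumped model), temperature would
# decay exponentially: T(t) = T_amb + (T0 - T_amb) * exp(-t/tau).
# A poor fit to measured cool-down data is one hint that another
# mechanism dominates. All numbers below are invented.
import numpy as np

t = np.array([0, 60, 120, 180, 240])               # s, assumed sample times
T = np.array([600.0, 540.0, 500.0, 470.0, 450.0])  # degC, invented data
T_amb = 25.0

# Linearise: log(T - T_amb) = log(T0 - T_amb) - t/tau, then least squares.
y = np.log(T - T_amb)
slope, intercept = np.polyfit(t, y, 1)
tau = -1.0 / slope
residual = y - (slope * t + intercept)

print(f"fitted time constant: {tau:.0f} s, "
      f"max residual: {np.abs(residual).max():.3f}")
# Large residuals => the exponential (convective) model fits poorly.
```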

Tarkes analysed the temperature measurement data and paved the way to accounting for the actual heat transfer mechanism. They ended up using digital photoelasticity measurements as a significant input, with which they are able to predict more accurate values to be used in the finite element simulation to account for this heat transfer mechanism.

The Bottom Line

In business everything comes down to a bottom line. I asked the team how their work fits into the larger frame. And this is what they had to say: “The work done by our group has potentially improved the accuracy of the simulation by several percentage points. Most importantly, the science of the process is now better understood. The discovery that the cooling of the lens in precision glass moulding does not happen through convection is a breakthrough. Each increment of accuracy is one significant step towards massive improvement at mass-production levels.” In short, thanks to such collaborations, all of us are able to get higher resolution cameras in our mobiles. More importantly, this will go a long way in improving the quality of optical lenses, which has a direct impact on clinical fields. ⌅

Meet the Author

Ayyappadas A M is a PhD scholar working in Fluid Dynamics at the Department of Applied Mechanics. Apart from the science-y stuff he does, by way of research and personal interest, he is an avid reader of philosophy and history. Infrequent blogger, but mostly harmless.




I remember my first visit to the Himalayas, standing thousands of feet above sea level. It was a spectacular view – dense green forests, massive rock cliffs, chains of silvered peaks and grasslands on the banks of the Mandakini river that ran deep and silent. Just imagining that the ground below me, while standing there, could start shaking at an acceleration approximately equal to one-fourth of that of a free-falling object gives me goosebumps. Although qualitatively similar, it certainly is not a ride at an amusement park. This exact phenomenon has happened in the region around Kedarnath temple multiple times, the most recent being in 2013. The temple is located near the Chorabari Glacier, one of whose two noses is the source of the Mandakini River. This terminates at Chorabari Lake, a lake a few hundred metres across and several metres deep, situated a few kilometres upstream from Kedarnath. In June 2013, heavy rainfall together with the melting of the snow from the glacier led to the complete draining of the lake within a matter of minutes. Once the lake had burst, the water carried along mud and debris down the valley and caused massive devastation to the entire town downstream.

However, there was no major damage to the temple structure. It is immensely strong, made of thick, massive granite and high-grade ‘metamorphic gneissic’ rock slabs, pillars and bricks. Simply put, these are rocks that changed form due to heat and pressure while buried deep below the earth’s surface; they have a banded appearance and are made up of mineral grains, typically quartz. They have a bright sheen, are rough to the touch and very hard. The walls and pillars are remarkably thick. The roof is an assemblage of multiple blocks with dressed stone on the exterior. This is one of the major reasons that the temple withstood the earthquakes.

The survival of the temple can also be attributed to a man-made platform which raises the temple superstructure and shielded the temple from the direct flow of gushing floodwater.

Chorabari lake and Kedarnath town. Courtesy: Dr. Menon

India has one of the largest stocks of heritage structures in the world, a number of which are formally recognised by the United Nations Educational, Scientific and Cultural Organisation (UNESCO). These sites include both cultural and natural properties. Formal systems that recognise conservation of heritage structures as a multidisciplinary engineering effort do not exist in India. Heritage conservation doesn’t mean freezing a building in time or creating a museum. Instead, it seeks to maintain and thereby increase the value of buildings by keeping their original architectural elements, favouring their restoration rather than replacement and, when restoration is impossible, recreating scale, period and character. Addressing the task of understanding and protecting heritage structures from natural hazards, ageing and weathering effects is a serious problem in India, compounded by the lack of adequate quality and quantity of manpower.



Dr. Arun Menon is an Assistant Professor at the Department of Civil Engineering at IIT Madras, where he has been since 2010. He also currently serves as the Convener of National Centre for Safety of Heritage Structures (NCSHS). His research interests center on structural conservation of historical monuments which include seismic response, assessment and retrofitting of masonry structures, historical seismicity and seismic hazard analysis. He received his M.Tech. in Civil Engineering from IIT Madras. He received his M.Sc. and PhD from ROSE School, University of Pavia, Italy.

With the intent of beginning a formal approach to addressing the safety of heritage structures, the National Centre for Safety of Heritage Structures (NCSHS) was established at IIT Madras, with Dr. Arun Menon, a professor in the Structural Engineering Laboratory at the Department of Civil Engineering, designated as the Convener. Dr. Menon tells me that NCSHS is envisioned as a long-term programme towards addressing the challenge of ensuring structural safety of historical monuments and other heritage structures in India. The plan is to collaborate with implementing agencies such as the Archaeological Survey of India (ASI), which would help in fundamental research and education.

Dr. Menon’s primary research interest is the seismic vulnerability assessment of building structures. The Indian subcontinent has a high seismicity - the frequency of earthquakes in a region. The country has been divided into four seismic zones (Zones II, III, IV and V), where Zone V and Zone II expect the highest and lowest levels of seismicity respectively. Kedarnath temple, situated on the Garhwal Himalayan range near the Mandakini river in Uttarakhand, stands in Zone V. Its neighbourhood has seen several earthquakes in the recent past, such as in 1991 (Uttarkashi) and 1999 (Chamoli). Again, in the majority of cases, no major damage to the temple structure was reported.

The temple is believed to have been built in the 8th century A.D. in the Nagara architectural style. The Shastras, the ancient texts on architecture, classify temples into three different orders: the Nagara or ‘northern’ style, the Dravida or ‘southern’ style, and the Vesara or hybrid style, which is seen in the Deccan between the other two. The Nagara style’s primary feature is a central tower (shikhara) whose highest point is directly over the temple’s primary deity. This is often surrounded by smaller, subsidiary towers (urushringa) and intermediate towers; these naturally draw the eye up to the highest point, like a series of hills leading to a distant peak. Setting the temple on a raised base (adhisthana) also shifts the eye upward, and promotes this vertical quality. The type of damage in a masonry wall depends on the relative alignment between the direction of the ground shaking and the wall. If the ground shaking is perpendicular to the wall, the most commonly found cracks are vertical, and if the ground shaking is parallel to the wall, the cracks are usually diagonal.

Out-of-plane collapse mechanism. Courtesy: Dr. Menon

Both of these lead to the formation of different failure mechanisms. Failure mechanism analysis examines the defects in design, quality and other parameters which led to the failure of the structure. In the former case, an out-of-plane collapse mechanism is observed, in which the wall leaves may be detached or the entire wall may overturn. In the latter case, an in-plane shear mechanism is realised and the entire column snaps. Masonry structures typically show two types of mechanisms under earthquake ground shaking, namely global mechanisms and local mechanisms. Global mechanisms, which are typically in-plane shear responses of structural walls, occur when the masonry structure is constituted of walls and floors/roofs that are well connected to each other. In-plane shear response in masonry walls is characterised by diagonal (X-cracks) or horizontal cracks. In most ancient masonry constructions, connections between structural elements and between roofs/floors and walls are poor, and these lead to out-of-plane mechanisms, or local mechanisms. Out-of-plane mechanisms are characterised by out-of-plane bulging/push-out or overturning of walls, parapets and other free-standing elements. The project started with the aim of restoring the damaged parts of the temple and preventing damage in future events. There are a lot of challenges in structural assessment of heritage structures. Prior to a physical inspection of a historic structure, as much information as possible about the structure must be gathered. Not only is it important to understand the actual structure of the historic building in question, but also the times in which it was built.

Multi-leaf walls. Courtesy: Dr. Menon

Most historical structures have multi-leaf walls, which are composed of one or more external and internal leaves. External leaves contain stonework or brickwork, and internal leaves are usually made of rubble masonry or very weak infill materials, such as earth or loose material. Dr. Menon mentions the lack of material characterisation of the inner and outer leaves of these walls. For example, the exterior and interior of the wall at Kedarnath temple are gneiss stone leaves, and the infill is rubble stone, which is irregular in shape, size and structure. Since these are not appropriately characterised, it is difficult to carry out further investigation. Other challenges include the unavailability of proper geometrical/architectural drawings and poor understanding of ancient construction practices, especially the sequence of construction: the contact and connection between external and internal leaves.

Dr. Menon and his team have visited the temple four times for geophysical studies, structural investigations and preliminary structural health monitoring. There is a smile on his face as he mentions that they were allowed to enter the temple premises during the nights, after all the pilgrims had left, through a small gate at the back. During their visits, the team used a lot of interesting equipment for analysis. An endoscope was used to analyse the walls of the temple, and the inner structure was found to contain voids in the core masonry. The endoscope is very similar to the one a doctor uses to check the interior of the body. Another instrument, an infrared camera, which takes pictures of the radiation conditions, was also used. Unlike a common camera, which forms an image using visible light, an infrared camera forms images using infrared radiation. By recording surface



Courtesy: Dr. Menon

temperatures under different exposure conditions, hidden voids and cavities were detected in the temple structure. Structural analysis of a structure is done by employing various techniques. Different forces, deformations and accelerations are applied to the structure and its components to assess their effects, since excess load may cause structural failure. These applied forces are called loads. One type of load is a dead load, which includes loads that are relatively constant over time, including the weight of the structure itself and immovable elements such as walls and plasterboards. The technique of subjecting the structure to dead load to assess its effect is called Gravity Load Analysis. When this was used on the temple, it was concluded that the structure has a very high safety margin against gravity loads.
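To give a flavour of what such a check involves, here is a textbook-style estimate with assumed numbers, not the team's actual analysis:

```latex
% Compressive stress at the base of a masonry wall of height H and
% density \rho under self-weight (assumed illustrative values:
% \rho = 2700 kg/m^3 for gneiss/granite masonry, H = 10 m):
%   \sigma = \rho g H = 2700 \times 9.81 \times 10 \approx 0.26 MPa
% Stone masonry of this kind can typically bear far higher stresses,
% which is consistent with a large margin against gravity loads.
\[
  \sigma = \rho g H = 2700 \times 9.81 \times 10 \;\approx\; 0.26\ \mathrm{MPa}
\]
```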

Another load which the temple’s structure was subjected to is the lateral load. These loads are live loads (temporary or of short duration, such as the load due to wind) whose main component is a horizontal force acting on the structure. Most lateral loads vary in intensity depending on the structure’s geographic location, structural materials, height and shape. Dr. Menon’s team considered hydrostatic pressure to complete the lateral load analysis, and found that the pressure has no damaging effect on the structure: the structure is safe against hydrostatic pressure. The temple was also subjected to gravity loading and a monotonic displacement-controlled lateral load pattern which continuously increases through elastic and inelastic behaviour until an ultimate condition is reached. This method of analysis, called Pushover Analysis, establishes the capacity of the structure. Pushover-based seismic assessment (comparing demands to capacity) showed that the structure is safe against such lateral loads too. The magnitude of acceleration mentioned in the first paragraph is a measure of how hard the earth shakes at a given geographic point. It is known as the Peak Ground Acceleration (PGA). After all the analysis, it was found that the temple structure is safe against earthquake ground motion below 0.25 g. It was also found that the timber sloped roof over the SabhaMandapa and inner mandapa (mandapas are halls in the temple) is of poor quality construction and



is vulnerable to earthquake shaking. The front gable wall (the triangular portion of a wall between the edges of intersecting roofs) has shown significant dislocation of stone blocks. To safeguard it from future earthquakes, the reconstruction of the stone masonry supporting walls of the truss and gable walls has been proposed. The proposal includes the introduction of timber band beams with timber cross ties along the four sides of the SabhaMandapa above the stone wall to ensure integral action of the entire structure in the event of earthquake shaking. The gable walls would also be provided with band beams to ensure the greater out-of-plane resistance that is required under earthquake shaking.

Prof. V Kamakoti from the Department of Computer Science and Engineering at IIT Madras has worked on a wireless sensor network for structural health monitoring. It is desired to monitor the inclination/tilt of the temple structure over a period of time. The process is automated and periodic. The sensors will be installed at Kedarnath temple and will send periodic readings every fifteen minutes to a remote server located at IIT Madras. The system is engineered to wake up upon an impact and is designed to work at sub-zero temperatures. Network connectivity seems to be an issue. BSNL connectivity is being used, which works for only part of the day, and there is no certainty that the power will remain on during the winter season, when the temple is shut down for six months. The work has been challenging for the team because of the unpleasant weather at the site, which is strikingly different from the weather in Chennai, where the team stays for the better part of the year.

A lot of work needs to be done in the northern region around Zone V. There is a central seismic gap in the Himalayan front which also includes Uttarakhand. A recent study claims that there is sufficient energy stored in the ongoing tectonic process to generate earthquakes of very high magnitude on the Richter scale. But the time of the event cannot be predicted. It might occur tomorrow, or years later. The only certain thing is that it is going to happen, and we need to be prepared. ⌅
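A minimal sketch of the kind of duty-cycled monitoring loop such a node might run is given below; all names, thresholds and intervals are invented placeholders, not the actual NCSHS design:

```python
# Toy structural-health-monitoring node: report tilt every 15 minutes
# and wake immediately on a large impact. All names, thresholds and
# intervals are invented placeholders, not the actual NCSHS firmware.
import random
import time

REPORT_INTERVAL_S = 15 * 60      # periodic reading, as in the article
IMPACT_THRESHOLD_G = 0.25        # assumed wake-up acceleration

def read_tilt_deg() -> float:    # stand-in for a real inclinometer
    return random.gauss(0.0, 0.05)

def read_accel_g() -> float:     # stand-in for a real accelerometer
    return abs(random.gauss(0.0, 0.05))

def send_to_server(payload: dict) -> None:
    print("uplink:", payload)    # stand-in for the uplink to IIT Madras

for _ in range(3):               # a few cycles for the demo
    accel = read_accel_g()
    if accel >= IMPACT_THRESHOLD_G:              # event-driven wake-up
        send_to_server({"event": "impact", "accel_g": round(accel, 3)})
    send_to_server({"event": "periodic", "tilt_deg": round(read_tilt_deg(), 3)})
    time.sleep(0.1)              # would be REPORT_INTERVAL_S on the device
```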

Meet the Author

Sanket Wani is a final year student pursuing B.Tech. in Chemical Engineering at IIT Madras. He can usually be found browsing popular science content on the internet. Of late, he has developed an interest in science writing. He also takes a keen interest in eating at fancy restaurants and watching football.




Much of chemistry deals with the study of the interactions between different types of matter. Such interactions are easily imaginable between two or more fluids, as the molecules of one fluid have the freedom to occupy the empty spaces between the molecules of the other. But what happens when the candidates involved are solids? Due to their physical constraints, two solids do not interact until you provide them a medium that facilitates such interactions. More often than not, these media are liquids called solvents - the excess liquid phase in which one or more solids (now called solutes) are dissolved to form a homogeneous solution. With a global market worth billions of dollars, solvents are an essential part of many sectors of the economy, including manufacturing, processing and transportation. Despite being critical in addressing some of the most important problems in chemistry research, as well as other challenges that society is currently facing, solvents also raise many environmental, health and safety issues. Firstly, most solvents come from finite sources, such as petrochemical or fresh-water resources. To add to this, the life-cycle of a solvent, right from its manufacture to its disposal, requires a considerable amount of energy input. Also, most traditional non-aqueous solvents, i.e., those other than water, such as benzene and toluene, tend to be toxic and to evaporate easily (they are volatile). This makes them difficult to store and transport, and eventually poses threats of atmospheric pollution and accumulation in living organisms. Given these drawbacks, there are several challenges that need to be overcome for the short- and long-term usage of solvents.

In the pursuit of more environmentally friendly solvents, scientists stumbled upon a class of molecules called ionic liquids. Ionic liquids are non-volatile molecules that seem to be able to dissolve everything, and thus have the ability to replace the conventional manufacturing medium. Sounds magical, doesn’t it? But as the great science fiction writer, Arthur C. Clarke pointed out – “Magic is nothing but science that we don’t understand yet.” And this magic is what Prof. Sanjib Senapati’s group at the Computational Biophysics laboratory, Department of Biotechnology, are trying to unravel – figuring out what makes ionic liquids (or ‘green solvents’ as they have come to be known in recent times) a chemical possibility and gives them these special powers.

Ionic liquids have a very unique chemistry. They are composed of ions like all salts but, unlike conventional salts such as sodium chloride, which have ions of comparable sizes, in an ionic liquid the ions differ a great deal in their sizes. The comparable sizes of the ions in a conventional salt lead to strong electrostatic interactions between them, allowing them to pack together uniformly, giving rise to a regular lattice structure. This does not happen in an ionic liquid, and therefore, they remain liquids at room temperature.

Prof. Sanjib Senapati received his M.Sc. degree in Physical Chemistry from the University of Calcutta, Kolkata. He obtained his PhD from IIT Kanpur. He has worked as a Post Doctoral Fellow at the University of North Carolina, Chapel Hill and the University of California, San Diego, USA. In 2005, Prof. Senapati joined the Department of Biotechnology, IITM, where he is currently a full Professor. Apart from ionic liquids, his other research interests include identifying drug targets and designing novel drugs for HIV and heart disease.



Prof. Sanjib Senapati with his research group in the Department of Biotechnology at IIT Madras. Courtesy: Prof. Sanjib Senapati

So, to put it in simple words, ionic liquids are salts that remain liquid at room temperature. Interestingly, since the ions involved in ionic liquids are large, a part of them is polar, or in other words, charged, whereas the other part is non-polar. As a rule, polar solvents dissolve only polar solutes, whereas non-polar solvents dissolve only non-polar solutes. Now, since ionic liquids have both kinds of components, they have the advantage of being amphiphilic – they can dissolve just about anything, even cellulose.

Prof. Sanjib Senapati, a theoretical chemist by training, focused on the application of chemistry in the field of biology during his post-doctoral days. There, he got introduced to the fascinating field of ionic liquids and has since been studying them keenly. His lab, recently declared one of the best performing bioinformatics facilities in India by the DBT Biotechnology Information System Network, approaches research problems using state-of-the-art molecular dynamics simulations.
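For a flavour of what a molecular dynamics engine does under the hood, here is a bare-bones sketch: a velocity-Verlet integrator for two Lennard-Jones particles in reduced units. Real biomolecular MD uses far richer force fields and production codes; this is illustrative only:

```python
# Minimal MD: two Lennard-Jones particles, velocity-Verlet integration.
# Reduced units (epsilon = sigma = mass = 1). Illustrative only.
import numpy as np

def lj_force(r_vec):
    """Force on particle 0 due to particle 1 for the LJ potential."""
    r2 = np.dot(r_vec, r_vec)
    inv_r6 = 1.0 / r2**3
    # F = 24 * (2/r^12 - 1/r^6) * r_vec / r^2
    return 24.0 * (2.0 * inv_r6**2 - inv_r6) * r_vec / r2

dt = 0.001
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros((2, 3))

f = lj_force(pos[0] - pos[1])
forces = np.array([f, -f])             # Newton's third law

for step in range(1000):
    vel += 0.5 * dt * forces           # half kick (mass = 1)
    pos += dt * vel                    # drift
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])
    vel += 0.5 * dt * forces           # second half kick

print("final separation:", np.linalg.norm(pos[0] - pos[1]))
```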

By using structural parameters (properties of a molecule such as the lengths, angles and planes between its atoms) to define the interactions between molecules, and computer codes to replicate the conditions of a lab experiment (such as temperature, pH, salt concentration and atmospheric pressure), they are able to simulate on a computer screen what goes on inside a test tube. The advantage of simulations over test tubes is quite evident; MD simulations allow one to see exactly what is happening at the molecular level in the course of a reaction. Debostuti, one of the graduate students working on ionic liquids at Prof. Sanjib’s lab, is amused at the naivety of our query when we ask her, “But what have ionic liquids got to do with biology?” She replies, “Well, there is much more to ionic liquids than meets the eye. After ionic liquids became a super hit in the manufacturing industry, the next big step for the industry was quite obvious – to try their hands at ionic liquids in enzymatic catalysis.” Catalysis is the speeding up of a chemical reaction in the presence of an additional participant called the catalyst. When bio-molecules act as such catalysts, the bio-molecules are termed enzymes and the reaction is termed enzymatic catalysis. She then



continues, “But, to be able to carry out an enzymatic reaction in ionic liquids, it becomes important that the enzymes themselves are stable in ionic liquids first. That is where the stability of bio-molecules in ionic liquids came into the picture.” Stability of bio-molecules such as DNA and proteins is one of the most critical issues that research in bio-sciences faces every day. These bio-molecules are delicate entities, with very short half-lives when stored at room temperature in water, their natural solvent. This is either due to hydrolysis (explained later) or the action of degrading enzymes, both of which cause their breakdown. Long-term preservation of bio-molecules is carried out by storing them under refrigerated conditions of −20 °C to −80 °C, temperatures which reduce the rate of hydrolysis and render the degrading enzymes inactive. Common sights in all biology labs are rows of refrigerators which house many precious samples. The failure of these refrigerators can lead to very significant losses – sometimes the whole career of a scientist is lost, sometimes irreplaceable samples such as rare cancer tissues are lost, and sometimes arrays of samples for future experiments are lost. The maintenance of samples at such ultra-low temperatures is therefore crucial, and adds significant costs and concerns to research. It is in fact ironic that this article was written while guarding a −80 °C freezer which threatened to go off during the Chennai floods power outage.

In such a scenario, the evolution of ionic liquids to the latest generation of bio-compatible ionic liquids ushered in the promise of an economical and hassle-free solution to this age-old problem. Research in this field was pioneered by Dr. Prabhakar Ranganathan, an alumnus of IIT Madras, and his group at Monash University, Australia.

They found that when DNA is stored in these bio-compatible ionic liquids, the structural features of DNA were maintained even after six months of storage at room temperature as opposed to a few weeks in water!

DNA structure Courtesy: Dept. Biol. Penn State ©2004.

DNA is one of the three common bio-molecules (DNA, RNA and proteins) and functions as the instruction manual for our body. These instructions are, however, not written in the letters of the English alphabet, but in a code with only four letters – Adenine, Cytosine, Guanine and Thymine – collectively known as nitrogenous bases. As can be seen in the figure, DNA has a very interesting structure. It is a double helix formed by two anti-parallel single strands made up of these nitrogenous bases. The backbone to these nitrogenous bases is provided by alternating sugar and phosphate groups. The two single strands are held together by hydrogen bonding – in other words, the attraction between a positively charged hydrogen atom on one strand and a negatively charged atom such as nitrogen, oxygen or fluorine on the other. The two strands twist around each other with an offset pairing, resulting in the formation of two kinds of grooves – major and minor: these are structurally opposite to one another and run alternately along the entire



Figure 1. Left: Water (in orange) surrounding DNA (in blue) in a 5 wt % ionic liquid (in green) solution. Right: Ionic liquids take the place of water in an 80 wt % ionic liquid solution. Courtesy: Prof. Sanjib Senapati

length of the DNA. The double helix of DNA is given structural support by a single layer of water molecules, referred to as the ‘spine of hydration’, which remains hydrogen-bonded to the DNA in its minor groove. Another way in which water lends structural support to DNA is via the ‘cone of hydration’ – small clusters of water that surround the backbone of the DNA, the alternating sugar and phosphate groups.

...water is both a friend and a foe to DNA. Interestingly, the same water molecules fail to stabilise the DNA when it comes to its covalent bonding. They disrupt these bonds, breaking the DNA down into smaller fragments, which are amenable to further breakdown right up to the monomeric components, the nitrogenous bases. This is called hydrolysis. To circumvent hydrolysis, two strategies are attempted in DNA storage – it is either stored in a dry state, i.e., dehydrated, or in water but frozen, so that the rate of hydrolysis diminishes rapidly. However, neither method is foolproof. With every dehydration attempt or freeze-thaw cycle, DNA, being made

up of extremely long, thin strands, tends to break, leading to structural damage. From the study by Dr. Ranganathan sprouted many unanswered questions that excited Prof. Sanjib and his group. They figured that MD simulations could serve as highly valuable tools to explore what ionic liquids do to the DNA to have such a stabilising effect on it. They found that on introducing ionic liquids into a setup of DNA surrounded by water, the ionic liquid molecules gradually start replacing water, and after a few tens of nanoseconds, there are very few water molecules left surrounding the DNA (Fig. 1). Now, this was a cause for both relief and worry, because water is both a friend and a foe to DNA. Interestingly, ionic liquids replace both the spine and the cone of hydration of the DNA. This should actually lead to the entire DNA structure collapsing; but with ionic liquids it does not, because of their ability to hydrogen-bond with the DNA the same way that water does. The result is the formation of a spine and a cone of ionic liquids that now support the DNA structure instead of a spine and a cone of hydration (Fig. 2). So, we have something that gets rid of most of the water and,



Figure 2. (a) Spine of hydration in the minor groove of DNA. Emergence of a spine of ionic liquids in (b) 5 wt % and (c) 80 wt % ionic liquid solutions. Courtesy: Prof. Sanjib Senapati

therefore, reduces the rate of hydrolysis to an almost negligible level while at the same time mimicking the supporting role of water. It’s a win-win situation! At this point, Debostuti acknowledges one of the drawbacks of using MD simulations for a study like this – a technique with a timescale of nanoseconds cannot really predict whether the DNA will remain stable for years to come. So, to substantiate the findings of their MD simulations, the group stored DNA in a large number of ionic liquids at different concentrations and at room temperature. The DNA remained stable even after a year of storage. Despite such positive findings, the use of ionic liquids as storage media is a field that has not yet been embraced by the scientific community. A major reason for this is that while ionic liquids have been found to be fantastic hosts for DNA, their track record with proteins has been patchy. Protein structure involves a great amount of diversity and complexity compared to DNA structure. Given this, while researchers are now able to recommend different ionic liquids for storing different kinds of proteins, they have not yet been able to pinpoint a single universal ionic liquid as apt for protein storage. Nevertheless, the search is surely on.

More importantly, the field of study is still in its infancy: people have figured out the pros of the technique, but are still not sure about the cons – such as any likely adverse effects of the long-term storage of bio-molecules in ionic liquids. Debostuti’s current job is to screen as many ionic liquids, from as many different classes as possible, for their potency in the stable storage of DNA. The goal is to be able to make a confident statement one day: “Yes, all ionic liquids are good for long-term DNA storage.”

Ionic liquids get rid of most of the water and, therefore, reduce the rate of DNA hydrolysis while at the same time mimicking the supporting role of water in the maintenance of DNA structure. Meanwhile, the group is also exploring what happens to DNA in an ionic liquid under conditions of environmental stress such as high temperature, which is known to melt DNA. The results of their preliminary study are exciting – the melting temperature of DNA in ionic liquids is far higher than in water, which means that DNA in ionic liquids can withstand higher temperatures.



Besides all this, Debostuti also sees applications of ionic liquids in many other fields, including extraction and separation technologies and, of course, the hot topic of drug delivery. However, the high viscosity of ionic liquids and their lack of specificity are proving to be challenges along the way, which scientists are trying to overcome in order to help these miracle solvents make their mark.

Debostuti is on the verge of her thesis submission. She admits that when she was offered the project initially, she was sceptical; DNA was never her ‘comfort zone’, she was more of a ‘protein’ person. But after five years of exciting work on ionic liquids and DNA, she seems to have changed her mind. She says in a very cheerful tone, “Today, if somebody asks me what I would like to do my post-doctoral research on, I would say DNA nanotechnology. Because, now, I can only think of DNA and there is so much more to be done!” And we wish her all the best with that! ⌅

Meet the Author Kiranmayi is a PhD student in the Department of Biotechnology. As part of her thesis, she studies the involvement of DNA changes in the development of hypertension and diabetes in Indian populations. A great admirer of the English language ever since she can remember, she aspires to be a technical writer after she completes her PhD and tries to find as much time as possible between her late nights at lab and her research seminars to keep up with this passion.



Algae on Fire

by Akshay Govindaraj

How complicated is the chemistry of burning fuels? What about that of burning algae? And how do we tackle these difficulties to harness the enormous potential of microalgae? We talk about how the extraction of fine chemicals and energy from algae differs from that from conventional sources.

Algae are largely single-celled organisms lacking roots, stems and leaves. Most algae, like plants, use photosynthesis to produce energy. Their simple structure makes them highly energy-efficient as well. They vary in size from a few microns to large macroscopic multi-cellular organisms, which can be tens of metres in length and can grow tens of centimetres a day. Microalgae generally refers to those species of algae of microscopic size, from about a micron to a few hundred microns. Most of the estimated 70,000+ species of algae are expected to be unicellular and fall within this range. But why do we care about these tiny organisms? For starters, their absence would take your breath away, quite literally! It is a common misconception that most of the oxygen we breathe comes from forests. On the contrary, forests consume almost as much oxygen as they produce. It is estimated that about three-fourths of the oxygen we breathe comes from algae alone.

Perhaps more relevant to the state of the environment today is the fact that when nutrients are available in plenty, algae populations can grow very rapidly, doubling in number every few hours. They can then be harvested quickly to produce usable biofuel. Traditional crops would require many times the land to produce an amount of biofuel equivalent to microalgae. Part of the reason why algae have survived on this planet for so long is that they adapt very quickly. This suggests that we might be able to grow algae in conditions where traditional crops fail to grow.

It is estimated that three-fourths of the oxygen we breathe comes from algae alone. The idea of fuel sustainability is considered to be the Holy Grail for global ecological health. But we are still quite far away from a state of sustainable growth, since we have thus far failed to find and utilise a source of energy that is renewable and can



Dr. R Vinu obtained his PhD in Chemical Engineering from the Indian Institute of Science, Bangalore, in 2010. He has authored over 30 research papers and one book chapter, and has filed a patent. He received the Young Faculty Recognition Award for excellence in teaching and research from IIT Madras in 2015.

act as a substitute for fossil fuels. Some argue that the use of biofuels derived from algae will take us closer to that goal, since algae take in carbon dioxide from the atmosphere during photosynthesis and effectively close the carbon cycle. Fossil fuels, on the other hand, bring additional carbon into the atmosphere. At the very least, we can say that algae-based fuels are greener than fossil fuels. Other sources of energy, like solar and wind, have high capital costs, and since the demand for energy is only ever going to increase, it is unlikely that solar and wind will completely cater to it. We definitely need to find other sources of energy. Some argue that microalgae might just be the perfect solution.

Use of algae as an ingredient in the manufacture of fine chemicals is a futuristic thought. In the conventional method of producing algal fuels, algae are fed carbohydrates and their secretions are collected. The high fat content of these secretions results in a high calorific value. It has been estimated that a total cultivation area the size of France would be enough to meet the whole world’s energy demand. Algae can grow on land that is otherwise unsuitable for agriculture, or even in water bodies; they might therefore be precisely what mankind needs to avoid the energy crisis without compromising much on our other uses of land. But Dr. R Vinu from the Department of Chemical Engineering, IIT Madras, believes that there is a lot more about algae we haven’t explored yet. He says, “Currently, uses of algae worldwide are mostly focused on the generation of energy. Use

of algae as an ingredient in the manufacture of fine chemicals is a futuristic thought.” Recently, it has been observed that when algae are burnt in the absence of air (a process known as pyrolysis), the resultant chemical composition, which differs from species to species, sometimes contains compounds of high value. These compounds can be used in pharmaceuticals, cosmetics and various other industries. Dr. Vinu and his team have been analysing the pyrolysis of several species of algae to try to understand the structure of each species and, more importantly, to find out whether any valuable chemicals can be obtained using the same process on a large scale. In chemical engineering, two kinds of products are manufactured. Bulk chemicals, including most petroleum products, are those which are manufactured in large quantities and usually have a continuous supply.

Fine chemicals, such as ingredients for synthetic drugs and cosmetics, are much more valuable and are usually manufactured and marketed in relatively small quantities. Recent observations suggest that the pyrolysis of algae can be used as a process to manufacture several fine chemicals. It has been known for a few years that direct pyrolysis of some algae species gives an end product rich in a class of organic compounds known as aromatics, which are characterised by pleasant smells. These compounds are used extensively in the manufacture of various plastics, detergents and drugs, including aspirin. But only recently have Dr. Vinu and his team discovered that some species yield another class of organic compounds, called cycloalkanes. These compounds have a plethora of uses in fields like



Dr. Vinu’s group at IIT Madras. Courtesy: Dr. R Vinu

refrigeration, pharmaceuticals and the manufacture of other important chemicals. But there are several challenges to be addressed before the manufacture of these compounds can be commercialised. Before any reaction or set of reactions can be conducted on a large scale, we must be able to simulate the whole process mathematically and accurately. This gives us an idea of how the process will react to any disturbance and also provides a quantitative estimate of the risks associated with the reaction, which is critical to prevent accidents. We depend on the field of chemical kinetics to provide an accurate mathematical description of the entire process. The set of all reactions expressed in the form of ordinary differential equations is known as a ‘mechanistic model’. The way chemical kinetics is taught in school is often misleading. It gives us the impression that all chemical reactions are highly predictable in nature and that it is just a matter of finding their reaction mechanism to describe the process – which is true, in a way. To find the reaction mechanism, the whole process has to be studied on a laboratory scale. But in practice, some processes, such as the pyrolysis of algae,

are just too complex for a complete mechanistic model. The numbers of reactions, products and intermediate species are often so high that it is close to impossible to find the exact reaction mechanism. In the large-scale production of energy or manufacture of fine chemicals, an incomplete predictive mathematical model of the process can give results which are completely aberrant.

The number of reactions, products and intermediate species are often so high in number that it is close to impossible to find the exact reaction mechanism. The team faced a similar problem during their initial study of biomass pyrolysis. It was later understood that the problem could be simplified using predictive models which are mathematically simpler. Predictive models often include only a few reactions – those with high reaction rates or those that give rise to more reactive species – while ignoring the others. One can choose a predictive model whose complexity is commensurate with the accuracy needed. The search is on for more accurate predictive models for algal pyrolysis.
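To make the idea of a lumped predictive model concrete, here is a minimal sketch with an invented three-reaction scheme. The species and rate constants are hypothetical, chosen only to illustrate how such a model becomes a small system of ordinary differential equations – this is not the group’s actual kinetics.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A deliberately tiny "predictive" (lumped) model in the spirit described
# above: keep only a few dominant steps instead of the full mechanism.
# Species and first-order rate constants are hypothetical, for illustration.
#   algae -> volatiles (k1);  algae -> char (k2);  volatiles -> gas (k3)

k1, k2, k3 = 0.8, 0.3, 0.1        # assumed rate constants, 1/s

def rhs(t, y):
    algae, volatiles, char, gas = y
    return [-(k1 + k2) * algae,                 # algae consumed
            k1 * algae - k3 * volatiles,        # volatiles made, then cracked
            k2 * algae,                         # char accumulates
            k3 * volatiles]                     # gas accumulates

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0])
algae, vol, char, gas = sol.y[:, -1]
print(f"t=10 s: algae={algae:.3f} volatiles={vol:.3f} "
      f"char={char:.3f} gas={gas:.3f}")
```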



But these predictive models are likely to be specific to particular species, since different species differ in their inherent chemical nature and give different end-product compositions even under similar conditions. Most species of algae are likely to yield only a small fraction of economically valuable compounds amongst a deluge of by-products, and since value is only associated with pure chemicals, these must be separated before they are ready to be used. But separation is often a difficult task. What makes it so? According to thermodynamics, the mixing of two pure components is often a spontaneous process, especially for compounds similar in structure. Since mixing is spontaneous, its exact opposite is not: additional energy of a higher grade must be supplied to separate the components of the mixture. Pyrolysis will often result in compounds of similar nature, which are even more difficult to separate. It has also been observed that algae have a high nitrogen content and, when burnt directly in the presence of air, release excessive amounts of nitrogen oxides.
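For the ideal two-component case, the argument can be made concrete with the textbook expression for the free energy of mixing (quoted here as a standard result, not from the article):

\[ \Delta G_{\mathrm{mix}} = RT\,(x_1 \ln x_1 + x_2 \ln x_2) \]

Since the mole fractions x1 and x2 lie strictly between 0 and 1, their logarithms are negative, so ΔG_mix < 0 and mixing occurs on its own; undoing it demands at least −ΔG_mix of high-grade work per mole of mixture, and real separations cost considerably more.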

Besides this, oil obtained from pyrolysed algae will probably not be usable as a transportation fuel, since most conventional internal combustion engines are only suitable for fuels with a specific behaviour. Algal fuels cannot easily be modified to match the chemical composition of conventional diesel or gasoline; but in the future we might reduce our dependence on conventional fuels and build internal combustion engines designed for algae-based fuels. Direct pyrolysis of algae may still be useful for generating energy in power plants. The whole idea of using algae to manufacture fine chemicals is still in its infancy and has a long way to go. So far, only a handful of algae species have been studied, but it is estimated that there are more than seventy thousand other species, and perhaps many of them can be used to manufacture more useful organic compounds. Today, most organic chemicals are obtained from exhaustible sources like crude oil. But given how quickly algae grow, in the future we might depend on algal sources to meet most, if not all, of our demand for organic fine chemicals. ⌅

Meet the Author Akshay Govindaraj is a student in the Department of Chemical Engineering at IIT Madras, whose interests are in certain areas of applied mathematics. While working on this project he understood much more about how applied mathematics is useful in chemical engineering. For comment or criticism, he can be reached at akshaygvdrj@gmail.com




In 1959, in a lecture entitled There’s Plenty of Room at the Bottom, the renowned physicist Richard Feynman said, “. . . But I am not afraid to consider the final question as to whether, ultimately, in the great future, we can arrange the atoms the way we want; the very atoms, all the way down!” What Feynman did in his lecture was to explore the possibility of advanced synthetic chemistry by direct manipulation of individual atoms and molecules. The conceptual insight, though revolutionary, did not generate enough waves in the scientific community, at least initially. It was not until the late 1980s that technology reached a stage where molecules and atoms could really be controlled and engineered by direct engagement at their own level. The scale of interest, as one can imagine, is exceedingly small – ‘nano’, as we know it now. For comparison, a human hair is about 80,000 to 100,000 nanometres wide. The ‘nano’ revolution ushered in the synthesis of several brand-new molecules and structures of nano size, with remarkable properties and hence diverse applicability. The field of nanotechnology grew rapidly in the years that followed and had already seen two Nobel Prizes by the end of the next decade. One such man-made nanostructure that has enjoyed considerable popularity since its first synthesis in the 1980s is the dendrimer and its assemblies. These are nanoscale molecules with beautifully symmetric and repetitively branched structures. A cursory search for ‘dendrimer’ on the Web of Science (Thomson Reuters) database produces tens of thousands of results. Here at IIT Madras, the effort to prepare these and other lightweight molecules for large assemblies to be used in a wide variety of

applications is spearheaded by Dr. Edamana Prasad and his group in the Department of Chemistry.

What Feynman did in his lecture was to explore the possibility of advanced synthetic chemistry by direct manipulation of individual atoms and molecules. “So, why have we chosen these molecules for our work?” says Dr. Prasad as we speak in his office next to the newly constructed Chemistry department building. Let us consider dendrimers, for instance. These organic molecules closely resemble bio-molecules such as proteins in shape, size and weight. Proteins are known to self-assemble and generate unique hierarchical nanoscale structures for performing various functions in the human body. Taking this important cue from proteins, one can, in more or less similar fashion, generate higher-order complex structures with useful functionality by properly customised aggregation of dendrimers. “But what useful functionality are we talking about?” was my follow-up question. Vivek, one of Dr. Prasad’s students, promptly replied, “What if I tell you that such appropriately designed big assemblies have self-healing ability and can assist in oil spill recovery too? And these are just a couple of their myriad uses.” I was intrigued. In order to better understand the process of designing them, I stepped into Dr. Prasad’s laboratory and spoke with his PhD students Partha and Madhu. The individual molecules undergo the process of self-assembly to create bigger assemblies.

Dr. Edamana Prasad is an Associate Professor in the Department of Chemistry at IIT Madras. He worked at the Photosciences and Photonics Laboratory, NIIST Thiruvananthapuram (CSIR, Govt. of India), and obtained his PhD (Chemistry) in 2000. His research interests include the study of aggregation kinetics in dendrimers, the mechanism of supramolecular self-assembly in dendrimers, and excited-state dynamics in self-assembled systems. Dr. Prasad also serves as the Head of the Teaching Learning Centre (TLC) at IIT Madras.



The protagonist of this story – the dendrimer – consists of ‘chemical shells’ that organise themselves beautifully in a symmetric, spherical pattern around a core. Each shell is made up of molecules which are functionalised to create a branched structure around the core. The number of branching events from the core to the periphery is known as the number of ‘generations’. One can easily visualise a dendrimer as a tree with many branches and sub-branches; indeed, the name derives from dendron, the Greek word for ‘tree’.

As a direct consequence of their fractal nature, their light emission properties show an unprecedented enhancement. Incidentally, branching in a tree reminds us of those fascinating objects called fractals – repeating, never-ending patterns that appear self-similar at various scales. So, do dendrimers also show fractal nature? Yes, they do. Dr. Prasad, with his former student Dr. Jasmine, reported that a popular class of dendrimers named PAMAM organise themselves in an aqueous medium and show fractal structures. The self-assembly of PAMAM is achieved by electrostatic forces. The fractal dimension is a statistical index that estimates the fractal nature of an object. It essentially quantifies how the pattern changes with the scale at which it is measured; if this index is a non-integer (e.g., 1.58), the corresponding structure is a fractal. For the PAMAM dendrimer assembly, this index was indeed shown to be a non-integer, and the fractal nature was thereby confirmed. As a direct consequence of this fractal nature, the light emission properties of these assemblies show an unprecedented enhancement. Fractals are perhaps nature’s favourite way of generating captivating complexity in the natural world, and it clearly works in the nano world too. One is then led to ask: what holds them together – or, to be more precise, what kind of forces mediate this self-assembly of small molecules?
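For the curious, a fractal dimension can be estimated by box counting: cover the structure with boxes of shrinking size and watch how the number of occupied boxes grows. The sketch below applies this generic recipe to a synthetic Sierpinski-gasket point set – not the group’s data or pipeline – recovering its known non-integer dimension of log 3 / log 2 ≈ 1.585.

```python
import numpy as np

# Box-counting estimate of the fractal dimension mentioned above, applied to
# a synthetic Sierpinski-gasket point set whose true dimension is
# log(3)/log(2) ~ 1.585. A generic illustration, not the group's pipeline.

rng = np.random.default_rng(0)
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p, pts = np.array([0.1, 0.1]), []
for _ in range(20000):                     # the "chaos game" builds the gasket
    p = (p + corners[rng.integers(3)]) / 2
    pts.append(p)
pts = np.array(pts)

sizes = 2.0 ** -np.arange(2, 8)            # box edges from 1/4 down to 1/128
counts = [len(np.unique(np.floor(pts / s).astype(int), axis=0)) for s in sizes]

# Slope of log(occupied boxes) vs log(1/box size) estimates the dimension
dim = np.polyfit(np.log(1 / sizes), np.log(counts), 1)[0]
print(f"estimated fractal dimension: {dim:.2f}")   # close to 1.58, non-integer
```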

Partha reminds me of the covalent bonds we encounter in introductory chemistry. A covalent bond is a strong chemical bond which involves the sharing of electron pairs between atoms. But for these big assemblies, we need the bonding to be of the non-covalent kind, which essentially means weak interactions. These forces operate beyond the molecular level and are responsible for the spatial organisation of complex molecular architectures through self-assembly; hence the name supramolecular forces. There are many kinds – hydrogen bonding, for example. Dendrimers and other such self-assemblies may have multiple supramolecular interactions holding their individual parts together. Note that because these interactions are weak, the system is amenable to tuning during synthesis: we need the freedom to make and break bonds easily to control the process. An important way in which some derivatives of dendrimers aggregate is in the form of helical structures. Partha has recently explored the mechanism in minute detail. He has found that such self-assembled systems may exhibit well-defined alignment leading to chirality at the macroscopic level, which means that the systems are not identical to their mirror images. This is the origin of the helical structures formed. This understanding of the mechanistic aspect is involved, as Partha says, but crucial for constructing supramolecular systems to a requirement. Now that we are armed with all of this information, it is time to design some gel systems, which happen to be a natural consequence of the aforementioned self-assembly. We have all encountered gels in our daily lives: butter, jam, shoe polish and hair gel are some of the gels we use regularly. A gel state is always easier to recognise than to define, noted the British scientist Dorothy Jordan Lloyd in 1926. Typically, a liquid system of two or more components turns into a gel when one of the solute components forms a three-dimensional cross-linked network inside the bulk gas or liquid. The formation of such a solid network within the fluid restricts its flow, resulting



Research interests in the Dendrimers laboratory, Department of Chemistry at IIT Madras.

in a jelly-like substance. Now, if the cross-linking of the network component of a gel is supramolecular in nature, we get supramolecular gels. Such gels are formed, for example, by the self-assembly of dendrimers in organic and aqueous solvents. In recent years, Dr. Prasad has been actively pursuing the study of the formation and properties of these ‘physical’ gels. “The two kinds of supramolecular gels that we use in our study are hydrogels and organogels,” Dr. Prasad informs us. As the names suggest, hydrogels have water as their solvent, while organogels are formed when an organic (carbon-containing) solvent is a component of the gel. Dr. Rajamalli, a former research scholar in Dr. Prasad’s lab, has investigated both gels in a series of publications. Organogels have a stimuli-responsive character, which means that an external or internal physical stimulus can prompt such gels to tune their properties. Dr. Rajamalli was able to design and synthesise an ‘instant’ organogel based on a class of dendrons with a specific kind of linkage. The gel may be used to detect the presence of fluoride ions, which have an

important role in biological systems. When the gel comes into contact with even a small concentration of fluoride ions, a gel-solution transition occurs which changes the colour of the solution from deep yellow to bright red, so that the presence of fluoride can be detected by the naked eye. A similar metal-induced gel was also prepared to detect lead ions, as reported last year by Dr. Prasad and his post-doctoral scholar Vidhya Lakshmi.

“The two kinds of supramolecular gels that we use in our study are hydrogels and organogels,” informs Dr. Prasad. More recently, Madhu has prepared an interesting three-component organogel consisting of one-dimensional nanofibres. One of its components contains cholesterol, a biocompatible molecule known for its ability to form one-dimensional structures and gels. Cholesterol effectively guides the process of supramolecular structure formation by stabilising various hydrogen bonds and regulating positive



and negative charge transfer in the system. These charges originate from the other chemical components of the system; therefore, in the presence of an applied potential, the system exhibits electrical conductivity. Such a system has been synthesised for the very first time here in this lab. Dr. Prasad’s student Sitakant continues this work to understand the mechanism of conduction in detail. Vidhya Lakshmi and Madhu have also synthesised another promising organogel, one which can assist in the recovery of spilled oil from water. In marine areas, accidental oil spills are a major concern, as they have detrimental effects on the surrounding ecosystem. Our friends in the lab observed that when a customised dendron-based solution, or gelator, comes into contact with oil on the surface of sea water, a robust gel system is formed that readily absorbs the oil floating atop. This gel is hydrophobic (water-hating) in nature, attains a wafer-like form almost instantaneously and floats on the water surface. The wafers of gel can then be removed, manually or mechanically, and the oil is easily retrieved by heating them. Surprisingly enough, the gelator remains intact and can be re-used five or six times with reasonable success. It is a highly efficient process of oil spill recovery.

This gel is hydrophobic in nature and attains a wafer-like form almost instantaneously and floats on the water surface. Such a dendron-based gelator has been synthesised for the first time in this lab. In addition, the gelator is useful for its anti-wetting and self-cleaning properties and in the formation of invisible ink. It is indeed fascinating to note what hatred for water can do. But as we shall see, love for water can be equally rewarding. Hydrogels consist of hydrophilic (water-loving) structures. These three-dimensional structures are cross-linked, enabling the gel system to hold large amounts of water.

Hydrogels are soft, flexible and resemble living tissues. Here in the Dendrimers laboratory, these hydrogels have been studied for their magnificent luminescence properties by Dr. Rajamalli, Supriya and Sadeepan. Dr. Prasad’s research student Prashant is attempting to use a hybrid hydrogel medium to enhance the photoluminescence of lanthanide ions via a phenomenon called resonance energy transfer (RET). Versatile and stable light emission is highly desired for optoelectronic applications and in cellular bioimaging. However, these are not the only compelling features of hydrogels. Dr. Prasad’s student Vivek explains that hydrogels have a remarkable ability to self-heal, which means that two or more separate fragments of the gel can stick together spontaneously as fresh bonds are created in the process. This is something like the flour dough we make to cook chapati: two pieces of dough easily stick together, if kept close enough, to create a larger piece. Hydrogels are soft, flexible and closely resemble living tissues, and therefore have several biomedical applications. These gels are currently used in reconstructive tissue engineering, wound dressing, contact lenses, etc. Vivek has recently synthesised a novel three-component hydrogel from some commonly available chemicals. Many of the known hydrogels have this self-healing ability only in an acidic medium, which limits their usage. The hydrogel synthesised by Vivek maintains its self-healing property even in a medium which is neither acidic nor alkaline, such as water. Perhaps the most important use of this hydrogel is in the purification of water. When heavy metal ions (which are toxic in nature) and organic residues present in water come into contact with the gel, they collect on the gel’s surface. The gel can then be easily removed, leaving pure water behind. The gel also has robust mechanical strength and high swelling capacity, two highly sought-after features of hydrogels from the point of view of applications. These supramolecular big assemblies that we have seen so far are truly striking. A significant



Dr. Edamana Prasad working with his students in the newly established Laser Flash Photolysis Laboratory at IIT Madras. The inset shows the laser beam at 532 nm.

application of supramolecular hydrogels is in drug delivery – an increasingly popular method of targeted administration of medicine in the body. It is one of the ongoing works in the Dendrimers lab, and Dr. Prasad’s student Ramya walks me through the details. “We are working on a controlled-release system,” she apprises me. A popular alternative approach to drug delivery uses macromolecules such as higher-generation dendrimers. In these dendrimers, as we have seen earlier, a series of chemical shells are attached at many levels. Such an arrangement gives them a spherically symmetric three-dimensional shape, much like a flower. The structure is highly porous and contains many empty pockets in which a drug can be loaded; the drug is then released slowly, as and when required. But there are some serious issues that need to be resolved. Higher-generation dendrimers are difficult to synthesise in a laboratory. In addition, some of these dendrimers are cytotoxic in nature, i.e., they can kill living cells. Ramya is attempting to resolve these issues by using low-generation dendrimers, or dendrons. The structures formed by such dendrons are flat fibres, not a three-dimensional morphology.

Therefore, to hold the drug, she uses a gel system created by an intelligent combination of water and an organic solvent. The gel thus formed is the same hybrid hydrogel we saw earlier. In the presence of the appropriate solvent and water, the dendrons self-assemble into long fibres entangled with each other. The drug is loaded along with the solvent in the gelation process itself and resides inside those fibres. The initial results do indicate that it is an effective control mechanism for the diffusion of drugs. Ramya is now beginning to study the biological aspects of her findings, in collaboration with Dr. Vignesh Muthuvijayan from the Department of Biotechnology, IITM. She is hopeful that the stability of this system and its nature will be favourable for drug delivery applications.

The initial results do indicate that it is an effective control mechanism for the diffusion of drugs. Interestingly, there exists another kind of dendrimer, the Janus-type dendrimer, named after the Roman god with two faces, one looking towards the future and one at the past. Janus



dendrimers have an amphiphilic nature, i.e., they consist of both water-loving and water-hating parts. Dr. Prasad’s student Prabakaran is interested in synthesising them and studying their self-assembly properties. These dendrimers tend to form vesicles in a mixture of solvents, can form hydrogels and thermo-reversible organogels, and also exhibit liquid-crystal behaviour. No wonder these gels find potential applications in the fields of drug delivery, gene delivery and sensors. Dendritic structures can also be good host systems for metal nanoparticles and quantum dots. Quite interestingly, Dr. Prasad, with his student Tufan Gosh, has recently shown that some graphene quantum dots (GQDs) immersed in aqueous solutions under certain conditions can be stable even in the absence of dendritic support and emit bright, pure white light. These GQDs are zero-dimensional and have tunable luminescence properties. On investigation, it was found that an assembly of GQDs forms in the solution, and it is this assembly that generates the white-light emission when suitably excited. The process of aggregation of GQDs is similar to that of dendrimers. The work has convincingly demonstrated that pure white-light emission can be obtained from a well-designed nanoscopic assembly of a single material – GQDs in this case. Dr. Prasad’s research students Kaviya and Lasitha, in a similar manner, use metal nanoparticles and nanoscale assemblies as sensors and catalysts. In the future, Dr. Prasad’s laboratory envisages more fundamental and applied research based on molecular assemblies. One of the major developments in this direction was the recent establishment of an ultrafast kinetic study facility (Laser Flash Photolysis) with the help of funding

from the Department of Science and Technology for a group project. This is the first facility of its kind at IIT Madras. The experimental set-up can be used to analyse electron-transfer kinetics, which mostly occur from the electronically excited states of molecules on the nanosecond timescale. Dr. Prasad and his group are now proceeding to analyse electron-transfer kinetics in molecular assemblies such as gels, an unexplored area at the research frontier. We have thus discovered that big assemblies with incredible features can be achieved by the intelligent design of small molecules.

The gels that we encountered find major applications in the development of smart materials. These are novel materials which tune their properties under the influence of external stimuli such as temperature, pressure, electric field, the nature of the ambient environment, and so on. They are immensely useful in the fabrication of sensors, and their self-healing ability gives such materials long-lasting durability. The biomedical applications of these gels, on the other hand, are limited only by our imagination: from drug and gene delivery to tissue engineering and biosensors, the list is long and rapidly expanding. Water purification and oil spill recovery may be counted among their non-trivial fields of usage. They also pave the way for a new generation of optoelectronics: the development of organic light-emitting devices, light-harvesting systems, photovoltaic cells and the like can greatly benefit from their versatile light-emitting properties. There is no doubt that these supramolecular big assemblies are playing an important and decisive role in shaping the future – the great future that Feynman talked about in his lecture more than five decades ago. ⌅

Meet the Author Swetamber Das is a PhD student in the Department of Physics at IIT Madras. He is involved in various activities for the popularisation of science. Working on this article exposed him to the fascinating world of Chemistry. He feels grateful to Immerse and Dr. Edamana Prasad for it. He is also exploring his newly found interest in the history and culture of the Indian subcontinent. He is an Assistant Editor of Scholarpedia. For comment or criticism, he can be reached at swetdas@gmail.com.




The singer swings his arms around as he delves into a raga alapana. The violinist listens carefully and plays the appropriate following phrases. The percussion artists join once the composition starts, and the audience begins to keep track of the tala enthusiastically. Applause sometimes occurs in the middle of a piece, when a particularly telling svara phrase is sung or an interesting mrudangam pattern played. As the three-hour concert comes to a close and the mangalam is sung, the curtains come down and listeners leave, content and filled with music. Music is an art form, a source of entertainment, a means of communication, a way to celebrate, a method of therapy, a source of joy. So what do computers have to do with music? Can a machine recognise ragas? Can Indian music be given a notation? Can a computer transliterate mrudangam beats? Can it separate a concert into different songs? Can it identify why certain songs make us cheerful, and others melancholic?

The first challenge faced by the team was the concept of a svara in Indian classical music.

These are some of the questions being explored by Prof. Hema Murthy and her students in a project that is part of CompMusic, a worldwide music information retrieval project examining various traditional forms of music. The music genres covered by the project are Carnatic (South India), Hindustani (North India), Turkish-makam (Turkey), Arab-Andalusian (Maghreb, Northwest Africa) and Beijing Opera (China). The project deliberately focuses on traditional forms of music that have not been documented the way some other systems of music, such as Western classical, have. One of the primary goals of the project is to showcase these systems of music to the world. Prof. Xavier Serra, coordinator of the project, a researcher in the field of sound and music computing, and a musician himself, first heard Carnatic music when he was an expert speaker on “Audio Content Analysis and Retrieval” at the “Winter School on Speech and Audio Processing”. In his own words, he had not heard anything like it before. He convinced Prof. Murthy to join the CompMusic project, overcoming her initial reservation that dissecting and analysing music would lead to losing the pleasure of simply listening to it. Prof. Murthy immediately saw that it was important for a musician to be part of the project, and TM Krishna, a popular Carnatic vocalist, agreed to be a collaborator. In addition, a student of his, vocalist and engineer Vignesh Ishwar, also joined the project officially.

Tonic Determination


The first challenge faced by the team was the concept of a svara in Indian classical music. Although loosely translated as a musical note, a svara is not so much a note of a single fixed frequency as a range of sounds hovering around a certain frequency. One may vocalise a particular svara, but really be singing a combination of them. When, say, one sings the svara ‘ma’ in the raga Sankarabharanam, ‘ma’ is the only svara pronounced, but the svaras ‘ga’ and ‘pa’ are also touched upon. In Western music, by contrast, pieces are composed to be performed in a prescribed fixed scale.

Prof. Hema A Murthy is a Professor in the Department of Computer Science. She obtained her PhD from the Department of Computer Science and Engineering at IIT Madras in 1992. Her areas of research are speech processing, speech synthesis and recognition, network traffic analysis and modelling, music information retrieval, music processing, time series modelling, and pattern recognition.



Group Delay Function

The group delay function depends on the frequency of a signal and is the negative derivative of the phase of the Fourier transform of the signal. The deviation of the group delay function from a fixed number is a measure of the non-linearity of the phase as a function of frequency. The heights of the peaks in the group delay function are inversely proportional to the bandwidths of the underlying resonances, and a sharp peak signifies less inflection.
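The definition in the box can be turned into a few lines of code: take the Fourier transform of a signal, unwrap its phase and differentiate with a minus sign. The sketch below works on a synthetic decaying tone rather than real concert audio, and shows the group delay peaking at the narrow-band resonance, as described above.

```python
import numpy as np

# Numerical group delay: the negative derivative of the unwrapped phase of
# the Fourier transform, exactly as defined in the box above. We use a
# synthetic decaying 440 Hz tone instead of real concert audio; the narrow
# resonance shows up as a peak in the group delay near 440 Hz.

fs = 8000.0                                   # sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
phase = np.unwrap(np.angle(X))                # continuous phase vs. frequency
tau = -np.gradient(phase, 2 * np.pi * freqs)  # tau(w) = -d(phase)/d(omega)

band = (freqs > 100) & (freqs < 2000)         # search away from the DC edge
print(f"group-delay peak near {freqs[band][np.argmax(tau[band])]:.0f} Hz")
```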

However, in Carnatic music the tonic is the reference note established by the lead performer, relative to which other notes in a melody are structured. Musicians generally perform across three octaves, the lower, middle and upper octaves. An octave consists of seven svaras, ‘sa ri ga ma pa dha ni’. The tonic is the note referred to as ‘sa’ in the middle octave range of the performer. In order to identify a raga, the first requirement is to determine the tonic, because it gives a frame of reference. The same phrase of svaras may be identified as completely different ragas if the frame of reference is different! The basic unit of measurement of musical intervals is a cent, which is a logarithmic measure. An octave is divided into 1200 cents, spanning 12 semitones of 100 cents each. However, based on the tonic, the range of frequencies in an octave changes. For example, if the tonic is 220 Hz, then the higher-octave ‘sa’ is at 440 Hz, so the range of frequencies in that octave is 220 Hz, and those 220 Hz are divided into 1200 cents. But for someone whose tonic is 260 Hz, the range of frequencies in an octave is 260 Hz, and those 260 Hz are divided into 1200 cents. Yet both these octaves are heard the same way by a listener. So tonic normalisation needs to be done. That is, the pitch histogram in the Hz scale needs to be converted to a pitch histogram in the cent scale.
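Tonic normalisation itself is a one-line formula: a frequency f is mapped to 1200·log2(f / f_tonic) cents. A small sketch, with illustrative tonic values rather than ones from the study:

```python
import numpy as np

# Tonic normalisation in one line: map a frequency in Hz to cents relative
# to the performer's tonic, so that histograms from singers with different
# tonics line up. The tonic values below are illustrative.

def hz_to_cents(f_hz, tonic_hz):
    return 1200.0 * np.log2(f_hz / tonic_hz)  # 1 octave = 1200 cents

for tonic in (220.0, 260.0):
    pa = 1.5 * tonic                          # the fifth ('pa') above 'sa'
    print(f"tonic {tonic:.0f} Hz: 'pa' at {pa:.0f} Hz = "
          f"{hz_to_cents(pa, tonic):.0f} cents")
# Both singers' 'pa' lands at ~702 cents: after normalisation they coincide.
```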

Group delay synthesised histogram

How does a listener determine what the tonic is? Usually one can immediately determine which note is the ‘sa’ (even if the note itself is not articulated and only the lyrics of a song are sung). This is because the svaras ‘sa’ and ‘pa’ are ‘prakruti svaras’, or fixed svaras, which are sung in a plain way compared to the other svaras; that is, the frequency bandwidths of these notes are narrower. Hence Prof. Murthy and the team used signal processing and machine learning techniques to find out which notes are the sharper ones. The pitch histogram of the music is processed using what is known as a group delay function. The group delay technique emphasises the svaras ‘sa’ and ‘pa’, and this gives the tonic. With this method, the group was able to achieve good accuracy in identifying the tonic. To fine-tune the method further, they segmented out the sound of the tambura, the instrument that provides the pitch reference to the musician. Determining the tonic from the tambura alone helped increase the accuracy of tonic identification still further.

Each raga has certain unique typical motifs that can be viewed as time-frequency trajectories.

Raga Verification

The next problem addressed was that of melodic processing. Can a computer listen to a song in a particular raga and determine what that raga is? First off, what is a raga? Loosely, it is a collection of svaras with some rules as to the combinations in which they can be sung. But a raga is not merely the notes that make the scale. The phrases (sequences of notes) that are intrinsic to the raga, the various gamakas (ornamentations) employed, the silences



between the notes, the inflections – all of these make up the aesthetics of a raga. How does a listener identify a raga? Somehow, within a few seconds of a musician singing a raga, a reasonably musically literate listener is able to identify it. As the group realised, each raga has certain unique typical motifs, or signature phrases. A typical motif is a phrase that a particular raga is replete with, and that does not figure in any other raga. A motif can be quantified by pitch contours and viewed as a time-frequency trajectory. The group realised that most of these motifs come from compositions set to tune in the raga, typically those composed by the ‘musical trinity’ – Shyama Sastri, Thyagaraja and Muthuswamy Dikshitar. In particular, they realised that it is the pallavi, the first segment of a song, that typically contains the richest phrases of the raga. Analogously, it is in the initial few phrases of an alapana (a particular form of melodic improvisation typically performed before a composition) that the identity of the raga to be performed is established. Prof. Murthy likens this to an artist drawing the outline of a landscape or portrait before filling in the details, or to a computational mathematician who can gauge the behaviour of a matrix by looking at its first few eigenvalues.

A mrudangam syllable has an onset, attack and decay. The first task was thus to build a database of typical motifs of commonly performed ragas. The team used an algorithm called the longest common subsequence, and a variant of it called the rough longest common subsequence, to identify those phrases that are frequently repeated in compositions. When we hear a raga, we first identify it with a smaller group of ragas, its cohorts. The team set about the task of defining the cohorts for a number of commonly performed ragas.
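The classic longest-common-subsequence computation at the heart of this motif search is a short dynamic program. The sketch below shows the exact textbook version on svara symbols; the “rough” variant used by the group additionally tolerates small deviations in pitch, which is not reproduced here.

```python
# The textbook longest-common-subsequence dynamic program that underlies the
# motif search. The group's "rough" LCS additionally tolerates small pitch
# deviations between phrases; only the exact version is sketched here.

def lcs_length(a, b):
    """Length of the longest common subsequence of sequences a and b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common phrase
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

phrase1 = "sa ri ga ma pa".split()
phrase2 = "sa ga ma dha pa".split()
print(lcs_length(phrase1, phrase2))   # 4: 'sa ga ma pa' is common to both
```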

They eventually realised that what really happens when one listens to a piece of music is raga verification rather than raga identification. When we hear a raga, we do not actually compare it with every one of the hundreds of ragas we might know. First we think, ‘hey, it sounds like this song’. We identify it with some smaller subset – its cohorts – and then verify which of the ragas in the smaller group it is. The machine is trained to do the same: the given time-frequency trajectory is first identified with a small set of cohort ragas, each of which has typical motifs derived from compositions and alapanas. These typical motifs are fed as queries, and if they occur in the input, the raga is thus verified.

Time Frequency trajectories of Kalyani and Shankarabharanam

The ragas Sankarabharanam and Kalyani differ by a single svara, yet anyone with a little knowledge of Carnatic music can tell them apart. Interestingly, comparisons of their time-frequency trajectories also show how unlike each other they are. This is why the computer has to be trained to recognise raga motifs as time-frequency trajectories rather than as mere notes. Carnatic music is primarily an oral tradition, and notations only provide a rough framework.

Percussion

The next task that the team worked on involved percussion. Percussion is a complex part of Carnatic music. The raga is the melody, while the rhythmic aspects are the laya and tala. Specifically, the tala is the rhythmic structure in which a composition is set. The mrudangam is the primary percussion



instrument used in Carnatic music, while other instruments such as the kanjira, ghatam and morsing are also used. The mrudangam playing can vary depending on the lead artist, the mrudangam artist himself, the song, the emotion conveyed by the music and so on. The silences in between strokes are as important as the beats themselves. Sometimes the playing is deliberately slightly off beat; this is called ‘syncopation’. Improvisation is a very important part of Carnatic music, and musicians usually meet for the first time on stage with no prior rehearsals. One of the purposes of analysing percussion is to put markers on a composition, to determine which part of the tala the song starts in, ends in, and so on. TM Krishna pointed out that ‘moharras’ (certain predetermined patterns played) are more or less fixed. But first, the beats of the mrudangam have to be transcribed in some way. Prof. Murthy was in fact approached by Padma Vibhushan Sangeeta Kalanidhi Dr. Umayalapuram Sivaraman, a renowned senior mrudangam artist, who wanted the beats of his mrudangam to be displayed on a screen as he played. Every beat played on the mrudangam can be articulated orally as syllables, for example ‘ta ka dhi mi’. This vocal articulation of percussion syllables, or ‘sollus’ as they are known, is also called ‘konakkol’.

The mrudangam stroke is viewed as an FM-AM (frequency and amplitude modulation) signal, since the sound of the mrudangam involves both pitch and volume. The strokes played on the right-hand side of the mrudangam are pitched strokes, while those played on the left side are not. There is, however, a certain coupling between them. In order to analyse the strokes and syllables of the mrudangam, Prof. Murthy relied on her work in speech recognition. Just as in speech, where each syllable has a vowel with consonants on either side, a mrudangam syllable has an onset, attack and decay. Onset is the beginning of the syllable, which reaches its crescendo in the attack, after which it begins to decay. They used what is known as a hidden Markov model, in which onset, attack and decay are the three states. Transitions between these can also be made states in this model. Using this model, they went about the process of classifying strokes. The team devised features to process strokes of the mrudangam, kanjira and some other percussion instruments.
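To illustrate how such states are decoded from audio features, here is a minimal Viterbi decoder over the onset-attack-decay chain. The transition probabilities, the single “energy” feature and its per-state levels are all invented for illustration; in practice they are learnt from labelled recordings with much richer acoustic features.

```python
import numpy as np

# A minimal Viterbi decoder over the three stroke states named above:
# onset -> attack -> decay. Transition and emission parameters here are
# invented for illustration; in practice they are learnt from labelled
# mrudangam recordings, with richer features than this 1-D "energy".

states = ["onset", "attack", "decay"]
log_T = np.log(np.array([[0.6, 0.4, 1e-12],    # onset stays or moves on
                         [1e-12, 0.6, 0.4],    # attack stays or decays
                         [1e-12, 1e-12, 1.0]]))  # decay absorbs

means, var = np.array([0.3, 1.0, 0.1]), 0.05   # assumed energy per state
def log_emit(x):
    return -0.5 * (x - means) ** 2 / var       # Gaussian log-likelihood

obs = [0.25, 0.4, 0.9, 1.1, 0.6, 0.15, 0.05]   # a toy stroke energy envelope
log_p = np.log([1.0, 1e-9, 1e-9]) + log_emit(obs[0])   # start in 'onset'
back = []
for x in obs[1:]:
    scores = log_p[:, None] + log_T            # best predecessor per state
    back.append(scores.argmax(axis=0))
    log_p = scores.max(axis=0) + log_emit(x)

path = [int(log_p.argmax())]
for bp in reversed(back):                      # trace the best path backwards
    path.append(int(bp[path[-1]]))
print([states[s] for s in reversed(path)])
# e.g. ['onset', 'onset', 'attack', 'attack', 'attack', 'decay', 'decay']
```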

An important goal of music information retrieval is music archival. After transcribing the strokes played by an artist, the machine uses the Markov model to locate the moharra. The moharra in turn gives the number of aksharas, or beats, in the tala, and so the cycle length is determined. The team is now working on how to find the point in the tala where the song begins.

Music Archival

Stroke occurrences for instruments in different pitches


An important goal of music information retrieval is music archival. In this respect, one task handled by the group was concert segmentation, a major part of music archival. Most available recordings of Carnatic music are continuous, but we often need to listen to only one particular song or raga. This requires that the concert be divided into different segments. Prof. Murthy initially suggested that they segment concerts into different songs using applause. How can applause be detected? Mapped as a time-versus-amplitude graph, it has the shape of an eye: a few in the audience start clapping, it reaches a crescendo and then



again becomes subdued. The team developed descriptors to detect applause, and fixed some criteria for the duration of an applause.
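A bare-bones version of such an “eye-shaped” applause detector might look like the following: compute a short-time energy envelope and keep stretches that stay loud for long enough. The thresholds and window lengths here are placeholders, not the team’s tuned descriptors.

```python
import numpy as np

# Sketch of an energy-envelope applause detector. All thresholds and window
# lengths below are placeholders, not the team's tuned descriptors.

def applause_regions(x, fs, win=0.05, thresh=0.5, min_dur=1.0):
    """Return (start, end) times, in seconds, of sustained loud stretches."""
    hop = int(win * fs)
    frames = x[: len(x) // hop * hop].reshape(-1, hop)
    env = np.sqrt((frames ** 2).mean(axis=1))      # short-time RMS energy
    env /= env.max() + 1e-12                       # normalise to [0, 1]
    loud = env > thresh
    regions, start = [], None
    for i, flag in enumerate(np.append(loud, False)):
        if flag and start is None:
            start = i                              # a loud run begins
        elif not flag and start is not None:
            if (i - start) * win >= min_dur:       # keep only long runs
                regions.append((start * win, i * win))
            start = None
    return regions

# Quiet noise with a 2-second applause-like burst in the middle:
fs = 16000
x = 0.05 * np.random.randn(6 * fs)
x[2 * fs : 4 * fs] += np.random.randn(2 * fs) * np.hanning(2 * fs)
print(applause_regions(x, fs))   # roughly one region, e.g. [(2.5, 3.5)]
```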

...the team was able to quantify applause. The strength of applauses indicates the highlights of a concert. However, the assumption made here is that applause occurs only at the end of each piece in a concert. But this is simply not true in a Carnatic concert! For example, a concert recording with a dozen or so items could end up segmented into many more parts if applause were used as the means of segmentation. This is because Carnatic music includes certain kinds of improvisation at various points during a concert, and the audience may spontaneously clap if they find a particular part of the alapana or svaraprasthara (an improvisation technique in which svaras are sung extempore to a particular line of the composition) appealing. Now, in every complete item of the concert, at some point or another, the vocal, violin and percussion all occur. Moreover, they all appear together as an ensemble at some point. Also, a composition occurs in each item, and an item always ends with at least a small segment of the composition. The machine is thus trained to look for those places where the vocal, violin and percussion are present together. It then goes backwards and forwards in order to identify the complete item and segment it out. How is this ‘merging’ done from the right and left? Changes in the raga can be detected, and this helps determine when the performer has moved on to the next composition. The group was able to successfully apply this technique to segment a large set of continuous concert recordings into separate items. Web discussion groups such as rasikas.org, which carry reviews and lists of items performed in concerts, made it possible to match song names to the segmented items.

In addition, the team was able to quantify applause. The strength of the applause indicates the highlights of a concert, so this algorithm can be used to pick out the best or most popular parts of a concert. The group has also been able to do some work on singer identification. Drawing on experience from Prof. Murthy’s work in speaker recognition, they have been able to work on identifying musicians by the timbre of their voices. This is possible by training a machine learning algorithm to learn the voice characteristics of different singers. This is also important in archival and concert categorisation.

Applause detection

Music Applications

The CompMusic team has contributed metadata for a large number of items from Carnatic music concerts and albums to MusicBrainz, an open online music encyclopedia and database which aims to serve as a repository of metadata for music from across the world. The group is also considering working on an application like SoundHound, in which one can query a song by humming a part of it. The CompMusic project has developed a web browser called Dunya, which is freely available for use. The work done on different genres by different groups can be tested on this browser. For example, one could think of applications like these:


1. One can sing a snippet and the browser determines the tonic and the range of frequencies;
2. If one is listening to a piece of music and reproducing it, the browser tells how accurately it has been copied – it can tell whether what has been reproduced is in the right ‘sruti’ (pitch);
3. One can feed it a piece of music and it can change the pitch and play the music alone (without the lyrics) at a different pitch.


Dr. Hema Murthy and her students in the Department of Computer Science and Engineering at IIT Madras

All these applications are likely to be of great use to students of music. The browser is maintained by the core CompMusic team. Each group develops the algorithms, and once they are robust, they are integrated into the toolkit on the browser. Initially, Prof. Murthy did not expect to find enough students interested in working on Carnatic music. To her surprise, she found that a large number of them – many with no background in Carnatic music at all – were enthusiastic about the project. Shrey Dutta, for instance, went on to learn to play the veena, began to listen to Carnatic music, and did much of the motif recognition work. He says now that he only needs to look at the time-frequency trajectory of a raga to identify it, not even requiring the analysis that the machine does! Jom Kuriakose also came to Prof. Murthy with little background in Carnatic music. He was, however, fascinated with percussion, and now works directly

with Umayalapuram Sivaraman on onset detection. Prof. Murthy is very happy that students have been incredibly open about working in what is considered a niche and old-fashioned genre of music.

Prof. Murthy stresses several times that any kind of machine learning should be implemented only with proper context. A proper knowledge base should be the frame of reference for any machine learning technique. Machine learning involves big data, and as one pumps more information in, it learns on the average. Signal processing gives results in the particular, but can make errors. Combining the two makes for an ideal recipe for music information retrieval. Far from taking away the joy of listening to music, Prof. Murthy says analysing Carnatic music has brought to light many things she had not realised earlier. A raga can sound drastically different when



interpreted by different musicians. Moreover, their structures have changed over the centuries, and only keep evolving with time. Why do some things work in music, and others fail? Why can one sing a particular gamaka in a particular raga but not in another? Several things are intuitive about music. Can a machine understand these subtleties? Computers can be trained to play chess, to prove mathematical theorems, to diagnose diseases, to recognise ragas and so much more. Can they also

be trained to think intelligently and creatively? To give elegant proofs, and discern between good and mediocre music? If pointed in the right direction, can they even see things that humans may miss? This requires a deep understanding of human cognition and the human creative process. By working on various forms of music, and more generally in the understanding of art and creativity, CompMusic is a significant contribution to the vast ocean of artificial intelligence.⌅

All the images have been taken from Prof. Murthy’s research papers.

Meet the Author Arundhathi Krishnan is a PhD student in the Department of Mathematics and a Carnatic musician. Her area of research is Functional Analysis and Operator Theory. She can be reached at arundhathi.krishnan@gmail.com





Centre for Innovation (CFI), IIT Madras calls itself a place where a student can ‘walk in with an idea, and walk out with a product’. Known within the institute as a place where students slog from midnight until daybreak, it encourages students to pursue innovative ideas and supports committed and ambitious student teams in participating in prestigious international competitions. Team Amogh is one such team, which has built IIT Madras’ first Autonomous Underwater Vehicle (AUV). This vehicle, codenamed AUV Amogh, is capable of performing a set of predefined tasks on its own, without human intervention. But why work on an AUV when there are many other interesting avenues needing attention? Ocean bodies cover over 70 per cent of the earth’s surface, but they remain unexplored to a large extent. Although industries making Remotely Operated Vehicles (ROVs) have taken up many underwater missions, there is still a need to develop AUVs,

especially because there are places deep underwater where effective communication is an issue. Also, research and development on underwater projects is progressing at a much slower pace than on land-based and air-based projects. Determined to push the envelope of underwater technology for exploration, the team decided to develop a fully autonomous underwater vehicle. In this endeavour, the team has also proved its mettle in several competitions. They had their first taste of success when they participated in the national competition – Student Autonomous Underwater Vehicle (SAVe) – organised by the National Institute of Ocean Technology. Outperforming all the other teams in every aspect, the team secured the first position. This success spurred them to participate in an international competition called RoboSub, conducted by the Association for Unmanned Vehicle Systems International (AUVSI). Competing against several teams, they emerged as strong contenders at the international level.

Side View of AUV Amogh.



Apart from these two competitions, they also participated in various innovation challenges, and the vehicle has been selected as one of the top student innovations in the nation. As it turns out, like most other success stories, Amogh too had humble beginnings. The team had initially set out to build nothing more than an ROV that was stable and manoeuvrable. In this phase, they designed the frame and the hulls, decided on the material to be used, analysed the structure, and came up with a waterproofing mechanism. While passing their prototype through a series of tests and upgrades, they faced numerous challenges, the biggest one being making the vehicle waterproof. After a detailed examination of various options, they chose water-tight PVC pipes. These pipes were fixed tightly to both the fore and aft ends of the hull, and then sealed using an epoxy substance.

The Team’s ROV.

Improving their prototype at every stage, they eventually built a self-powered ROV. The vehicle had lithium polymer (LiPo) batteries on board to power the thrusters. These batteries were controlled using a terminal connected to a wireless router, which was in turn connected to the vehicle through an ethernet cable.

Designing the AUV

Needless to say, the mechanical team was entrusted with the indispensable task of designing and manufacturing the vehicle. More precisely, they had to design the following parts: the pressure hulls, the frame, and the camera enclosure. Since pressure hulls provide a watertight enclosure for the vehicle’s electronics, their design could not be overlooked. A configuration of two cylindrical hulls was chosen because it helped reduce the resistance of the vehicle and provided enough room for the electronics to be mounted on. The bottom hull was made heavy in order to counter the buoyant forces that would push the vehicle out of the water. In order to achieve a higher speed per unit of power input, the hull was fitted with an ellipsoidal nose, as this shape offered the least drag on the structure. Furthermore, simulations helped determine the wall thickness that would enable the hull to withstand the water pressure at its intended operating depth. Waterproofing, the backbone of any underwater vehicle, was done meticulously. A customised cap, with grooves to accommodate two rubber o-rings, was permanently attached to the aft end of the top hull. The cap was further covered with a flat disc, which consisted of co-axial holes (not visible in the figure) to mechanically squeeze the o-rings and ensure watertightness. In order to prevent any chance of water entering the hull, the gap between the cap and the flat disc was sealed using silicone grease.
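The pressure check that such simulations formalise can be approximated with thin-walled-cylinder arithmetic. The sketch below is a back-of-envelope illustration only; the depth, radius, material strength and safety factor are all assumed numbers, not the team's.

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def min_wall_thickness(depth_m, radius_m, yield_strength_pa, safety=3.0):
    # Thin-walled cylinder: hoop stress = p*r/t, so t >= p*r*FoS/sigma.
    pressure = RHO_WATER * G * depth_m  # gauge pressure at depth
    return pressure * radius_m * safety / yield_strength_pa

# e.g. a 60 mm radius hull rated for 10 m depth in PVC (~50 MPa yield)
# needs only ~0.35 mm of wall for pressure alone; in practice handling
# loads and stiffness, not pressure, drive the real thickness.
print(min_wall_thickness(10, 0.06, 50e6))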

The team’s success in the ROV phase further fuelled their ambition to build an AUV. However, this was by no means a mere continuation of their work until then. Each aspect of the AUV design demanded expertise in a specific area. Three sub-teams were formed to tackle this – the mechanical, electrical, and software teams. Although these teams were carved out of the original team, they needed to function in coordination with each other.

The Top Hull.


At this juncture, it is worth noting that an AUV is also required to perform certain slick manoeuvring tasks. For this purpose, the vehicle was equipped with four thrusters to achieve control in several degrees of freedom. Two thrusters placed on either side of the frame facilitate surge (forward/backward motion) and yaw (rotation in the vehicle’s own plane) control. Two thrusters – fore and aft – positioned axially upwards provide heave (up/down) control.
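How the three motion demands map onto the four thrusters can be written as a small mixing matrix. This is a generic sketch with assumed sign conventions and normalised throttle commands, not Team Amogh's control code.

import numpy as np

# Rows: surge, yaw, heave demands. Columns: left, right,
# fore-vertical, aft-vertical thrusters (assumed layout).
MIXER = np.array([
    [1,  1, 0, 0],   # surge: both side thrusters push forward
    [1, -1, 0, 0],   # yaw:   side thrusters run in opposition
    [0,  0, 1, 1],   # heave: both vertical thrusters act together
])

def thruster_commands(surge, yaw, heave):
    demand = np.array([surge, yaw, heave])
    cmds = MIXER.T @ demand            # distribute demands to thrusters
    return np.clip(cmds, -1.0, 1.0)    # saturate to the throttle range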

The AUV’s Frame

Powering the Motion

Let us now move on to the team without which the vehicle would be rendered powerless – the electrical team. They were responsible for power management, circuit design, and mission control. The electrical module consists of a Central Processing Unit (CPU), a micro-controller, power supply units, sensors, thrusters, and other essential peripherals.

The motherboard, or the CPU, just like the one in your desktop, does the main job of image processing and mission control, and provides a platform for all the components of the vehicle to communicate with each other. The micro-controller controls the

motion of the vehicle by changing the rotation speed of the thrusters on receiving commands from the CPU. The primary control board, an interface for the various sensors used in the vehicle, initially had all its components soldered on by fitting their wire leads into holes on the board – referred to as through-hole technology in electrical hardware parlance. The team then re-designed the circuit using Surface Mount Technology (SMT), in which the components are mounted directly onto the control board. Because SMT components have smaller leads or none at all, this significantly reduced the size of the board. Except for the fabrication of the board itself, everything was designed indigenously. The bridge between the micro-controller and the thruster is the motor driver: a circuit that draws power from the batteries and drives the motor at the speed demanded by the micro-controller. By now, it is natural for the reader to assume that all such high-tech components, such as the micro-controller or the motor driver, were purchased from electronics stores in the market. However, this is not true. One of the seemingly insurmountable goals the teams set for themselves was to eschew ready-made components and design their own instead. They intended to pursue this gradually, replacing the ready-made circuits with ones they designed. This helped them build components tailored to their needs. Among the most essential of these components were the thrusters, which consumed a large share of the total power. As a result, they demanded high-capacity batteries to run the vehicle. Four lithium polymer batteries, which together provided the required endurance, were therefore chosen for the entire vehicle. The higher-voltage batteries powered the thrusters, and the remaining low-voltage ones supported all the other peripherals. However, this isn’t all that there is to an AUV, even when looking at it solely from an



A Labelled Diagram of AUV Amogh

electrical engineering viewpoint. For anything to be autonomous, sensors are essential. Amogh uses five kinds of sensors – a pressure sensor, an inertial measurement unit (IMU), current sensors, voltage sensors and a leak detection sensor – and a pair of cameras. The pressure sensor is used to determine the depth of the vehicle below sea level. The IMU measures the orientation of the vehicle in degrees. Current sensors measure the current flowing through each device, since a high surge in current might permanently damage the device. Since an excessive discharge of lithium polymer batteries leads to catastrophic failures, voltage sensors were incorporated to regularly monitor the voltage across the batteries. In order to prevent any damage due to water leaking into the hull, a circuit was built to identify the intrusion of water. Two circular probes were mounted near the end cap of the hull. Since water conducts electricity even with slight impurity, a voltage appears across these probes in its presence and is amplified. This voltage signal is sent to the micro-controller to trigger a shutdown of the system.
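The shutdown logic is simple enough to sketch. The snippet below is an illustrative reconstruction in Python rather than the team's firmware, and every threshold in it is an assumed value.

LEAK_THRESHOLD_V = 1.5   # assumed probe voltage indicating water ingress

def check_safety(probe_voltage, battery_voltage, device_currents,
                 v_batt_min=13.2, i_max=10.0):
    # Shut down on a leak, an over-discharged LiPo pack, or a current
    # surge through any device; otherwise report that all is well.
    if probe_voltage > LEAK_THRESHOLD_V:
        return "shutdown: leak detected"
    if battery_voltage < v_batt_min:
        return "shutdown: LiPo over-discharge"
    if any(i > i_max for i in device_currents):
        return "shutdown: overcurrent"
    return "ok"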

Steering the Ship

Last but far from least, the software team was responsible for image processing, mission control, and designing a simulator. The significance of their role is best explained using the following example from a competition they participated in. As per the problem statement in the RoboSub competition, the vehicle was supposed to touch two buoys. The vehicle was guided towards the first buoy by a plank placed on the floor of the water body. The buoy had to be traced by the front camera before the AUV reached the end of the plank. Once the vehicle tapped the buoy, it would bounce back and traverse towards the other buoy, as before. To achieve this, the vehicle leveraged its two cameras, placed at the front and the bottom of the vehicle. However, there was a challenge: the images weren’t clear enough for a spotted object to be detected. They had to be corrected to remove the



blue tinge, a characteristic of underwater images, and brightened in order to improve visibility. Any image had to be preprocessed to ensure high chances of the corresponding object getting traced on the camera. How did they accomplish this? Note that a camera treats images as being composed of a large number of tiny coloured squares called pixels. Each pixel is a combination of three colours – red, green and blue. So if a bluish tinge has to be made negligible, the red and green channels can be boosted in intensity. In another such issue, water, because of its high refractive index, deviates light from its original path, leading to reduced visibility. The resulting dark images are corrected by a method called gamma correction. In this method, the RGB colour space is converted to another colour space called HSV (Hue, Saturation, Value). The dullness of the images can be

rectified by increasing the saturation of the image. This leads to bright objects becoming brighter and dark ones becoming darker. Furthermore, the orientation of the plank with respect to the frame was determined to correct the path of the vehicle. As the vehicle got closer to the buoy, the area occupied by the buoy in the image got larger. Once it reached a certain threshold, the vehicle was programmed to move further, hit the buoy and come back to take another course. The mission-control part of the software determined the power to be given to each of the thrusters to move along a particular course. Because the LiPo batteries had limited endurance and needed a significant amount of time to charge, the team also designed a simulator to work out the challenges of mission control.
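The colour-correction steps described above translate almost directly into image-processing code. The sketch below uses OpenCV to show the idea; the gain values are illustrative assumptions, not the tuned parameters the team actually used.

import cv2
import numpy as np

def preprocess(frame, red_gain=1.4, green_gain=1.2, sat_gain=1.5):
    # Boost the red and green channels to suppress the blue tinge
    # characteristic of underwater images.
    b, g, r = cv2.split(frame.astype(np.float32))
    r = np.clip(r * red_gain, 0, 255)
    g = np.clip(g * green_gain, 0, 255)
    bgr = cv2.merge([b, g, r]).astype(np.uint8)
    # Raise saturation in HSV space to make dull images vivid again.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def buoy_close_enough(mask, area_fraction=0.1):
    # Commit to the hit once the detected buoy fills enough of the image.
    return cv2.countNonZero(mask) / mask.size > area_fraction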

Team Amogh with their AUV.



AUVs have a plethora of applications. They are used mainly in detecting leaks in oil pipelines deep in the ocean. They are even used in detecting corrosion in a ship’s hull, ballast tanks, the piles of a dock, and oil tanks. A technique called non-destructive testing, which uses ultrasonic sound, can measure the thickness of the corroded layer: ultrasonic waves with frequencies of the order of a few MHz penetrate the corroded layer before being reflected by the underlying metal. Team Amogh’s project is representative of how student teams work together in groups to

participate in competitions, taking charge of different lines of work to perfect every single component involved in the design of the vehicle. The team now plans to upgrade the present design by using brushless thrusters, and to slowly move towards a modular design. To keep pace with present-day technology, they have also decided to use acoustic sensors to determine the vehicle’s location precisely. A startup, named Planys Technologies, has also emerged out of the project. Currently incubated at IIT Madras, Planys plans to deliver customised autonomous vehicles specific to different underwater applications. ⌅

All images are courtesy of Team Amogh, CFI

Meet the Author Rahul Vadaga is a 4th year Dual Degree (B.Tech.-M.Tech.) student in the Department of Electrical Engineering at IIT Madras. Fascinated by the idea of ‘building things on one’s own’, he joined the Centre for Innovation (CFI). After a year-long thrilling ride at CFI, he decided to write about one of its notable and successful endeavours. He feels grateful to Immerse and Team Amogh for presenting him with an opportunity to do so. Of late, he has been exploring the area of Artificial Intelligence in order to understand its immense possibilities for the future. For comments or criticism, he can be reached at rahul.vadaga@gmail.com





As the torrential rains lashed the hill slopes of Uttarakhand, life came to a complete standstill in one of the biggest tourist centres of the country. Hundreds of lives were lost and property was damaged. Fear and panic spread as survivors unsuccessfully tried to contact their loved ones. In such situations, rescue and relief operations are extremely difficult. Owing to the lack of authentic information and the breakdown of communication, the days following a disaster have never been devoid of panic and confusion.

To address this vital aspect of disaster management, namely the establishment of a post-disaster communication system, the Japanese government, in collaboration with various technical institutes in India and Japan, has set up the DISANET (Information Network for Natural Disaster Mitigation and Recovery) program. The main aim of this program is to develop a complete model that covers the various aspects of disaster management, including the monitoring and modelling of weather and seismic activity, the development of a robust communication network, and the execution of effective relief in a post-disaster situation. The entire program was split into four major divisions, each of which was taken up by specific research groups from India and Japan. Among

the four divisions, the development of a sustainable communication architecture was undertaken by Prof. Devendra Jalihal and Prof. David Koilpillai from the Department of Electrical Engineering at IIT Madras, along with Keio University, Japan. They have developed a very innovative and effective means of communication that can function as well as a cell-phone network even when all the existing communication systems fall prey to a disaster. It has all the important features required for post-disaster communication: a short installation time, greater accessibility, effective outreach and the broadcast of authentic information. It does not require any custom-built equipment, which makes it readily deployable at any location within a short time. Mobile phones are the most common means of communication and are heavily depended upon. Therefore, this system makes use of the technologies existing in mobile phones to provide an immediate, temporary means of communication. The physical setup includes an LTE (Long Term Evolution, commonly known as 4G) transmitter, or antenna, and other related equipment, housed in a hoisted helium balloon to ensure a large coverage area. The coverage is enhanced by using an FM broadcast system, which can convey information to victims regarding relief supplies, precautions and other rescue operation details. This information is broadcast by an authentic source, such as the district collector, over a certain frequency that the victims can tune in to. A low bit-rate digital data channel called RDS (Radio Data System) can be used to broadcast centralised relief information in text format that can be read on a mobile phone.

Dr. Devendra Jalihal is a Professor in the Department of Electrical Engineering at IIT Madras. He received his B.Tech. from IIT Kharagpur in 1983 and then completed his Masters in Engineering at McMaster University in Hamilton, Canada. In 1994 he joined the Department of Electrical Engineering at IIT Madras. Prof. Jalihal enjoys teaching the fundamentals of Electrical Engineering, such as signals and systems and communication theory, in undergraduate courses. His research interests include Statistical Signal Processing, Detection and Estimation Theory, and Digital Communication.



Generally, RDS is used by FM radio channels to display text such as the name of the song, the channel and sometimes even the song lyrics on the screen of the audio setup. Similar text containing helpline numbers, the whereabouts of relief materials, and so on can be broadcast through RDS. Since FM-RDS is a feature available on a large number of modern GSM handsets, it offers the greatest outreach to the victims.
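RDS RadioText messages carry at most 64 characters at a time, so a longer relief bulletin has to be split into frames. The following sketch shows only that chunking step; the helper function and the example message are made up for illustration.

def radiotext_frames(bulletin, width=64):
    # Split a bulletin into word-aligned chunks of at most `width`
    # characters, one per RadioText message.
    frames, current = [], ""
    for word in bulletin.split():
        if len(current) + len(word) + 1 > width:
            frames.append(current)
            current = word
        else:
            current = f"{current} {word}".strip()
    if current:
        frames.append(current)
    return frames

bulletin = ("Relief camp at the government school. "
            "Medical aid 0900-1800. Helpline 1077.")
for frame in radiotext_frames(bulletin):
    print(frame)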

As mentioned above, the FM broadcast is used to convey information from the authority at the control centre to the victims. Similarly, the victims can also send text messages, images and even short videos giving details of their location and condition to a designated number over the network. To facilitate this, WiFi and LTE (4G) technologies are used. Since WiFi has a short range, multiple antennae are set up in the surrounding areas, and the main antenna is placed in the basket of the helium balloon, which functions as a temporary base station. LTE has a large range – several kilometres – and therefore eliminates the need for any intermediate towers. Hence it saves time and is the best means of communication when there is no possibility of setting up towers. Both WiFi and LTE provide high bit-rates and can be used for streaming videos and images. For the benefit of those having mobile phones without WiFi or 4G, there is also a GSM setup on a temporary tower. This acts like any ordinary communication tower, supporting voice calls and messages.

The details sent by the victims via the above-mentioned modes are gathered by the authorities at the centre and provided to rescue workers to carry out relief operations. The rescue troops can also use this communication network to stay in constant touch with each other and with the main control unit.

Circuitry inside the GSM Base Station Courtesy: Nithyanand Rao



The DISANET Team Credits: Prof. Devendra Jalihal

During any disaster, the main cause of communication breakdown is a surge in traffic over the usual communication networks. “Due to people trying to contact their loved ones in the affected areas, there is a large and sudden increase in the number of calls being made to a particular subscriber, leading to congestion and eventual breakdown”, says Prof. Jalihal. The DISANET communication system has therefore introduced the ‘I am Alive’ feature to address this problem. A victim in the affected area sends a text message or an image to the call centre, which then publishes his or her mobile number, along with the message, date and time, on the internet as a searchable entity accessible to everyone. In this way, the well-being of a victim is conveyed to a large number of people at a time, thus avoiding excess traffic. During the floods in Uttarakhand, the casualties’ details and information were not available even four or five days after the disaster.

This led to uncertainties about the whereabouts and well-being of victims. To overcome this limitation, the DISANET communication system makes use of the ‘Person Finder’ feature developed by Google, which uses various attributes of a person to confirm their identity. The rescue operators take images or videos of the victims and send them to the main operation centre. This data is presented to the world in a standard format known as PFIF (People Finder Interchange Format), and consists of victim details displayed on dashboards. Initially, only the



picture and a few details gathered by the rescue operators are available, but with time, people who know the victim can add and update other details, making the information complete. In this way, the data can be refined and augmented over time. This completes the entire framework of the DISANET communication system, along with its features for providing effective and robust communication during disasters. The system was successfully tested on a small scale at IIT Madras in July. Prof. Jalihal mentions that there

are ongoing talks with the Chennai Police and the Railways about implementing some features of this technology, with a few modifications, in heavily crowded areas during the city’s festive seasons. It cannot be emphasised enough that the malfunctioning of communication systems during a crisis amplifies the difficulties faced by the victims as well as the rescue troops. Hopefully, the establishment of such an efficient communication network will reduce the confusion during such times and, in the process, make rescue work easier and more effective. ⌅

Meet the Author Tejdeep Reddy is a third year undergraduate student, pursuing his B.Tech. in Naval Architecture and Ocean Engineering at IIT Madras. He is also actively involved in the development of an Autonomous Underwater Vehicle at the Center for Innovation, IIT Madras. To know more about this remarkable vehicle, go to page 44 and start savouring its story!





A disobedient mass of cells – loosely called a cancer, or tumour – sits in the midst of healthy tissue. Evading the body’s immune system, and drawing sustenance from the blood vessels that it manages to recruit around itself, the rogue mass continues to grow as cells within it divide, and divide again. As it works hard at this deadly exercise, the tumour cannot help but warm up and give off some heat. This heat radiates outwards and is ordinarily lost to the surrounding spaces. Dr. Kavitha Arunachalam and her group at the Department of Engineering Design have been working on ways to detect this naturally-emitted heat reliably, using microwave radiometry. They also use externally-supplied heat to help destroy the growing mass, using an approach known as hyperthermia.

When a microbe (a bacterium or a fungus) infects a human body, we manage to attack it with chemicals, such as antibiotics, which take advantage of the bacterium’s vulnerabilities – vulnerabilities that are not shared by our own cells. In contrast, a cancerous cell is like a healthy cell in almost every way, except that it somehow manages to divide uninhibitedly. A cancer generally does not announce its arrival in a hurry, and it cannot be made to leave

without a sacrifice of some of the body’s healthy tissue. In the specific context of breast cancer, which affects more than a million women every year, another unfortunate fact needs to be faced. Today, the most reliable method of detecting breast cancer is X-ray mammography, a technique which involves sending high-energy radiation through breast tissue. X-rays have enough energy to damage DNA and create mutations; they can therefore potentially cause cancer even as they are used to detect its presence. Thus, there is a dire need for alternative imaging methods. Dr. Kavitha’s group works with low-energy, low-frequency, non-ionising electromagnetic radiation: microwaves, which have already given us tools such as radar, radio telescopes, GPS, mobile phones and microwave ovens. Microwave frequencies are lower than those of red and infrared light, and go down all the way to the frequency range of radio waves. Dr. Kavitha’s group believes that microwaves have a potential for medical imaging and treatment that is only just beginning to be explored. Microwaves, unlike X-rays, cause no mutations in DNA. Also, the heat that a tumour generates includes a component of microwave radiation. This makes microwaves ideal for use in both treatment and imaging: firstly because one does not need to introduce any external radiation to perform imaging, and secondly because much less harm is done when one does need to send some microwave radiation into cancerous tissue to destroy it.

Dr. Kavitha Arunachalam is an Assistant Professor in the Department of Engineering Design at IIT Madras. She works on microwave antenna design, non-destructive testing of materials, and development of instrumentation for biomedical applications. Dr. Arunachalam obtained her B.E. in Electronics and Communication Engineering from the College of Engineering, Guindy, Anna University, and her PhD from the non-destructive evaluation laboratory at Michigan State University. Following postdoctoral research at the hyperthermia research laboratory, Duke University Medical Centre, she joined IIT Madras in 2010.



(from left) Dr. Kavitha, Geetha, Rachana and Vidyalakshmi.

A microwave radiometer transmits nothing to the object that it images. It merely receives and measures the radiation generated by the object, making it completely safe for use with tissues. When used for cancer detection, the radiometer simply detects the heat that a tumour generates at microwave frequencies, and uses this information to find the tumour. The heat radiated by a tumour, and by healthy parts of the breast, carries information about temperatures. This allows a radiometer-based device to create a three-dimensional temperature map of the breast. Going inwards, as the temperature rises, one encounters a series of isothermal (equal-temperature) contours centred around a hotspot – the cancerous mass. How well the device is able to locate a tumour depends on its ability to measure differences in temperature. The tumour’s size at this stage is roughly five to ten millimetres across – large enough to be resolved from surrounding tissue at the wavelengths used. The temperature of an object tells us how energetically its atoms and molecules are moving around. Every object which is at any temperature

above absolute zero (at which atomic motion ceases) radiates some energy. The object might make up for what it loses this way by absorbing radiation that falls on it, in order to maintain a steady temperature. This is a fundamental consequence of the restlessness of the charged particles inside it; the energy of that motion is converted to the energy of the radiation emitted. The nature of the radiation emitted by such an object depends on its temperature. This is how astronomers estimate the temperatures of stars. In the spectrum of light received from a star, a particular frequency has the maximum representation in terms of energy. The higher this peak frequency, the higher the temperature of its source. So blue stars are hotter than red ones.

If, instead of receiving and analysing the entire spectrum, we were to build a device focusing on a select group of frequencies – e.g., the microwave region of the spectrum – then the amount of energy that such a device would get from the radiating



object would be directly related to its temperature. Of course, this is an oversimplification and only roughly true; but it holds at low frequencies. Microwaves are low enough in frequency for this to hold, making them a reasonable choice for the narrow band of frequencies that one chooses to detect in the case of cancers. Another factor that determines this choice is how deep the technique allows us to look. The tumour emits heat at all frequencies, but as these waves make their way to the antenna at the surface of the breast, their energy gets absorbed by layers of muscle, fat and glandular tissue. It is possible to form a picture of breast tissue by measuring infrared emissions as well, but such a picture would go less than a centimetre deep, because infrared emission, having much higher energy, dies out much faster with distance. Microwaves, on the other hand, can bring information from as deep as three to four centimetres into the breast. As Dr. Kavitha explains, decent resolution and very good penetration are what make the microwave frequency range a good choice for such applications.
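In this low-frequency limit, the detected power is linear in temperature. This is the standard Rayleigh–Jeans (Johnson–Nyquist) radiometer relation; the worked numbers below are illustrative assumptions, not the lab's design values.

\[
P \approx k_B \, T \, B
\]

Here $k_B$ is Boltzmann's constant, $T$ the temperature of the emitting tissue, and $B$ the detected bandwidth. At body temperature ($T \approx 310$ K), an assumed $B = 100$ MHz band gives $P \approx 1.38 \times 10^{-23} \times 310 \times 10^{8} \approx 4 \times 10^{-13}$ W, a fraction of a picowatt, consistent with the signal levels described below.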

Note, however, that while these waves reach us with more of their initial energy intact than infrared does, this says nothing about how much of that energy was there to begin with. Vidyalakshmi MR, the research scholar who designed and built the device circuitry, gives me an idea of just how weak the signal they are trying to catch is. As I enter the lab with her, she sits down with a sheet of paper and proceeds to explain, with extreme efficiency, everything I can understand about how the device works, despite her dismay at my lack of knowledge of anything but the very basics of field theory. I watch in awe as she lists out all the mobile, Bluetooth and WiFi signals that are always zipping

across everywhere around us, and shows me, in the middle of that chaos of frequency bands, the tiny signal that their device works with. It’s a signal, she tells me, as weak as what one would receive on the Earth’s surface from a satellite in orbit. A good-quality call on a mobile network would generally use a signal about a hundred thousand times stronger than that, and such mobile signals would most likely be abundantly available to interfere with the detection device anywhere that it might be used. The problem, then, is not only to sort out the frequencies but to detect and deal with such a weak signal in the first place. It is a signal in picowatts, a billion times weaker than the milliwatt scale at which most power sensors operate. Measuring a broader range of frequencies, which would have increased the detected power, is made impossible by the flanking communication bands. What’s more, every measurement device has random internal variations, called noise, that are usually too small to make much difference to the signal but can distort it beyond recognition if the signal itself is equally small. To be able to detect the power without adding any noise, while also making sure that it was a signal only from the cells and not from the environment, the lab had to meet the challenge of designing a very good front-end for the instrument. A front-end is the component of a communication system that directly takes the signal collected by the antenna, processes it and passes it on. The front-end normally starts with what is called a band-pass filter, which passes on the required frequencies and gets rid of the rest. Following this, the output from the filter is amplified so that later stages in the circuit can work with a better signal. So at first, Vidyalakshmi tried placing the filter and amplifiers in this configuration. It didn’t work; what the filter received from the antenna was too weak for it to work with. The front-end that she finally developed has three stages of amplifiers to get the strength up to a decent level before the signal reaches the filtering stage. At the end of each stage, there are isolators that act like valves or one-way gates, so that nothing



happening in any part of the circuit feeds back to affect the stage before it. I ask her if she was apprehensive about choosing such a weak signal in such a crowded zone of the spectrum. She is surprised by the question; she never doubted that it could be done. The device is now ready, complete with casing. With a heated water bath and a thermometer, she has been testing how well it can measure temperatures. So far, the results have been rewarding. The tabletop radiometer, needless to say, is more portable than any X-ray system used for recording mammograms. It does not involve shielding, isotope handling, or specially built units, and is almost a hundred times cheaper to manufacture. It runs on two AA batteries.
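The arithmetic behind that front-end choice is easy to sketch. The figures below, the input power and the per-stage gain, are assumptions chosen only to show the scale of the problem, not the lab's measured values.

import math

def dbm(power_watts):
    # Convert a power in watts to dBm (decibels relative to 1 mW).
    return 10 * math.log10(power_watts / 1e-3)

signal_w = 0.5e-12                       # about half a picowatt, as quoted above
print(f"input signal: {dbm(signal_w):.0f} dBm")   # roughly -93 dBm

# Three amplifier stages of, say, 20 dB each lift the signal to a level
# the band-pass filter and detector can actually work with:
gain_db = 3 * 20
print(f"after amplification: {dbm(signal_w) + gain_db:.0f} dBm")  # roughly -33 dBm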

Measured properties of PVAL solutions can be compared with known tissue behaviour Courtesy: Dr. Arunachalam

Along with the practical applications of microwave radiometry for imaging come a whole host of complexities. Rachana S Akki, also a research scholar in Dr. Kavitha’s group, is working to identify and model the factors that affect the quality of the scan, and to develop protocols that will ensure its reliability. Some factors, she explains, are beyond our control – the size and depth of the tumour, for instance, and the balance of fat and glands in the breast tissue. Because the amounts of power normally emitted from different patients’ breasts differ, the device must be able to tell whether

a change in the measurement is due to the presence of a cancer, or merely due to diversity in the composition of the breast tissue. The elimination of subjectivity, I realise from my conversations with Rachana and Dr. Kavitha, is the ideal that medical imaging of all sorts constantly tries to attain. A scan that can be carried out any number of times, by different people with different skill levels, and still look the same, is a scan that can be trusted. An ultrasound probe, for instance, with all its constant movement, offers no hope of obtaining such a scan reproducibly, even when the person operating it is skilled. In both X-ray mammography and radiometry, the chances of such disturbances are reduced by keeping the device steady and compressing the breast between two plates. This compression is necessary with X-rays so that large volumes of tissue do not absorb too much radiation. It also gives a better-quality scan, since the healthy tissue is compressed more easily, bringing the tumour closer to the surface. But X-ray mammograms are painful, with the breast being compressed to half its size. Rachana emphasises that the microwave radiometer, on the other hand, can work with much, much lower levels of compression – about a quarter or even a fifth.

To model the breast, Rachana prepared solutions of a chemical called polyvinyl alcohol (PVAL) in water, changing the amount she added each time to get gels with different properties. After making a wide range of gels, she studied their stiffness and other mechanical properties. Combining available data about how fatty, glandular and mixed breast tissues behave under compression with the results of her experiments on these PVAL ‘phantoms’, she was able to deduce which PVAL solution would mimic which kind of tissue. This allowed her to design and control



computer simulations of the breast, and to extract information, for instance, about the power emitted by hotspots in fatty or glandular breasts when compressed to different extents. One way in which this is relevant is that a more glandular composition leads to more discomfort under compression, and this has to be taken into consideration while deciding on imaging procedures. These studies also gave Vidyalakshmi an estimate of the power levels that the front-end must be designed to take as input. Developing protocols for imaging by studying all these factors will someday allow the device to be made suitable for use in clinics in urban as well as rural areas where, unlike in the lab, the environment is not controlled. That is why it is important to carefully look into which factors influence the measurement, their relative significance, how much they may vary, and the corresponding effects of such variations on the results. Researchers can then optimise the parameters that can be controlled, to give reliable results while minimising cost, pain and discomfort. “If a defined protocol is there, then the person who is handling the device will have a checklist,” says Dr. Kavitha. “That will give us greater confidence that a hotspot detected is from the tissue, and not from the influencing environment.”

The microwave radiometer is intended to become an alternative screening tool, one that can avoid unnecessary exposure to ionising X-radiation during regular screenings. If a cancerous growth is suspected, however, X-ray mammography would still remain the gold standard. Apart from preliminary scans, the microwave radiometer could also be used for intermediate screenings to check a patient’s response to treatment – currently done using infrared thermograms – and for follow-ups that check for recurrence, which currently tend to combine ultrasound and X-ray examinations. Looking into how microwaves can be used along with these techniques could help increase the scan depth while reducing the risk of exposure to harmful radiation.

It turned out that most of the pieces of equipment that I saw in Dr. Kavitha’s lab had been built in-house; these include, amongst other things, a large variety of antennae. The antenna that Rachana has made for the radiometer is a small, nearly flat, circular one, like a stethoscope disc. An antenna can cover either a wide angle up to a short distance, or a long distance in a specific direction – and this applies to both transmitting and receiving microwave antennae. In this case, the antenna needs to collect radiation from as large a region as possible, and the source of this radiation is not too far away, so a wide-angle antenna is the best choice. In other applications, directionality becomes important, for microwaves can be used not only to detect cancers but to treat them as well – and in this effort, with the location of the tumour known, an irradiating antenna must transmit in a very specific direction. Another of the PhD students, Geetha Chakaravarthi, is working on one such application, and the lab shelves are dotted with antennae and other devices she has prepared to investigate the use of microwaves to kill cancerous cells. This technique, called hyperthermia, works by heating up cancerous cells by focussing microwave radiation upon them. Hyperthermia supplements chemotherapy and radiotherapy as well, by differing mechanisms.

Prototype of the microwave radiometer


Applicator placement on a healthy volunteer during preclinical pilot study Courtesy: Dr. Arunachalam

Radiotherapy, where cancer cells are killed with X-rays, works better in those parts of a tissue which are rich in oxygen. Thus, tumour cells are more sensitive to radiotherapy if they are located near a major blood vessel. Microwave therapy, on the other hand, works better where the blood supply is poor, because the blood is then unable to effectively carry away the extra heat. It can thus reach areas of the tumour where radiotherapy fails. Together, radiotherapy and hyperthermia can defeat the tumour more effectively.

As a supplement to chemotherapy, microwave hyperthermia acts by improving the delivery of the drug to tumour cells. In chemotherapy, the blood carries a drug to destroy the tumour, the idea being that the higher metabolic activity of the cancer cells will lead them to take up more of the drug than healthy cells do. Heating the tumour with microwaves effectively forces the body to raise circulation in the heated region in an effort to regulate its temperature, and more circulation means better drug delivery. It is crucial that the heating be highly localised, so that there is as little damage as possible to healthy cells.

While it is known that microwave therapy can improve radiotherapy and chemotherapy results, effective devices have not yet been developed for this. The ongoing effort in Dr. Kavitha’s lab is to develop patient-friendly devices well-suited to treating both large and small tumours with very site-specific application of therapeutic microwaves. The ‘patch applicator’ which Geetha has developed is the smallest one currently available, and its gently concave surface, unlike the rigid surfaces of most of its predecessors, allows it to rest comfortably on the patient’s skin. To get the microwaves to the tumour, a transmitting antenna sends radiation through a water bag placed on the skin (without the water bag, the antenna would burn the skin). Computer simulations as well as clinical measurements done by the group have shown that it is very important to get rid of air bubbles in the water if the microwaves are to reach deep tumours. But ‘degassing’ systems for obtaining bubble-free water are expensive, either incorporating specialised bubble traps or filters, or degassing the entire liquid before use. And since less power reaches the target cells if bubbles are present, any inefficiency in removing bubbles makes higher doses of radiation necessary to achieve the same effect. Geetha’s work has led to a new degassing system that is much cheaper than existing ones. In



this setup, a pump circulates water through pipes while an electronic feedback system maintains the volume and temperature of water in the applicator. Equipped with sensors, the degassing system uses

A cost effective Inline degassing system Courtesy: Dr. Arunachalam

its control over flow rates to remove bubbles with a vacuum chamber. Because it can degas

a circulating fluid efficiently and economically without disrupting the ongoing process, this invention has the potential to revolutionise a number of other medical procedures – dialysis being one case in point. Both medical imaging and therapy, as far as cancer is concerned, are fields dominated today by toxic chemicals and high-energy radiation. Technologies in both fields are far from ideal, but the goals are clear. Imaging needs to be as non-invasive as possible, with higher and higher degrees of accuracy. Therapy needs to be as specific as possible, with little or no effect on healthy parts of the patient’s body. In the battle against cells that can look just like their healthy sisters, and invite widespread destruction with every effort to kill them – but which cannot conceal their own heat footprint – there is little doubt that the future will see enormous contributions from microwave research. ⌅

Meet the Author Shivani Guptasarma grew up in Chandigarh, where she attended school at the Sacred Heart Senior Secondary School for girls and developed interests in all areas that are currently classified under the STEM subjects. She joined IITM in 2014 for a B.Tech. in Engineering Design and an M.Tech. in Biomedical Design. She is excited about her courses because they allow her to continue to study biology, maths, electrical and mechanical engineering, with prospects of someday developing insights and tools to help human beings.



Inside Healthcare Policy

By Isha Ravi Bhallamudi

Image credit: Prashanth NS via Wikimedia Commons

Much like research in the sciences, research in the Humanities and Social Sciences is multifaceted and can be approached through a variety of perspectives, methodologies and tools. To offer a glimpse into research in the social sciences, this piece explores the field of Health Economics and Public Policy through an interview with Prof. VR Muraleedharan from the Department of Humanities and Social Sciences at IITM.




In a career spanning three decades, Prof. VR Muraleedharan (or Prof. VRM) has made enormous contributions to research on public health policy. In order to learn more about what policy research entails, and to understand the complexities involved in the field of healthcare policy, I found myself stepping into Prof. VRM’s office one sunny evening for an interview about his work and extensive research experience in this field – in particular, his several recent and very exciting projects.

Policy Research: The Big Picture

Policy refers to a broad set of decisions, plans and actions undertaken to achieve specific goals in a region or nation. In the field of healthcare, formulating effective policies is crucial, as good policies can have profound effects on the state of health in a particular area. For example, a policy subsidising contraceptives in HIV-prone areas could, in the long run, lead to drastically reduced rates of disease. To ensure that there is a high level of quality and access to healthcare, it is necessary to have well-thought-out, effective healthcare policies. Coming to what policy research is: it involves examination of the design and process of policy making and implementation, evaluation of policy outcomes, and also an analysis of the factors that constrain the effectiveness of policy, including figuring out exactly how and why particular policies worked or may work under certain circumstances. “Normally, a policy is viewed as a black box: the interest lies in the inputs or elements that form the policy, and the output or outcomes of the policy”, Prof. VRM points out. “But working on policy research means being interested

in what goes on inside the black box, understanding the pathways and dynamics that make particular policies work. So, for effective policy analysis, you have to open the box and correlate the two.”

Is there any one broad theme that reflects the essence of the policy-oriented research taking place across the projects that Prof. VRM has been involved in? “Our focus across several projects has centred on one single question, one I am very fond of”, Prof. VRM says thoughtfully. “The first part of this question is the realisation that every rupee spent on one person is a rupee denied to another – as someone working in the development field, this is a daily chant. But the second part is a critical economic question from the public point of view: is that rupee well spent, given that someone else is being denied it?” This really is a very interesting and tough question. To illustrate: the government health budget for Tamil Nadu this year runs into several thousand crores of rupees – and this money was spent with a view to certain benefits and their distribution. “But what we would like to know”, Prof. VRM says, “is how it was distributed across different socioeconomic spectra, the particular benefits of public spending on healthcare, how equitably the benefits of government spending are distributed, how it can be improved with a sense of equity, and how much of the pie the poor get in terms of benefits.”

In a research career spanning over three decades, Prof. VR Muraleedharan has held a multitude of significant roles in academia, research, and the policy sphere, including as a Member of the Mission Steering Group of the National Rural Health Mission, Govt. of India, and as a Senior Researcher for national and international bodies such as DFID; he has been a full Professor in the Humanities and Social Sciences department, IITM, since 2000. He is also an IITM alumnus, having completed his PhD here in 1988.



A Deeper Look at Policy-Oriented Healthcare Research Across Different Time Periods and Regions

A large part of Prof. VRM’s work has focused on studying the costs of, access to, and coverage of healthcare in Tamil Nadu, especially by comparing healthcare interventions and health indicators in TN to those in other regions and states of India and in other countries, and using the research findings to craft constructive healthcare policy. One such project is titled ‘Good Health at Low Cost, 25 Years On: What Makes a Successful Health System?’. It carried forward a research project comparing healthcare across a particular set of countries that was carried out in 1985. Twenty-five years later, it sought to analyse how and why each of these and other countries accomplished substantial improvements in health, access to services or innovative health policies relative to economically comparable regions or countries. In India, only the state of Tamil Nadu was studied, because the scale of diversity in India makes it difficult to generalise such a study to the entire country based on a few states.

This project necessitated analysing the past 25 years of healthcare in Tamil Nadu, and was carried out by Prof. VRM and Prof. Umakant Dash from the HSS department. It drew on their extensive past experience and research in the field, as well as a large number of interviews with higher-level officials who could explain policy changes, had worked at the district level before, and were closely involved in the implementation of various programmes. Speaking to higher-level officials who had worked in

different states was the best way to get a comparative perspective relative to other Indian states, and to find out how TN made use of certain financing measures and central government programme features to achieve a higher level of healthcare. The project found that Tamil Nadu had not spent lavishly on healthcare until 2005, when the National Rural Health Mission was instituted, or even after that. “We wanted to look at the period before that to see how places that spent relatively little on health managed to bring about better health outcomes”, Prof. VRM explains. “We have several programs in India, targeted for particular diseases for example, and the same programs in every state. But some do better than the others. How does one explain that?” One way is to construct the story behind these events: all the central secretaries interviewed during this project gave the impression that “Tamil Nadu is good at seizing money fast when there’s a big pool of money for allocation.” But the other question to be asked is what percentage of the allocated money is spent effectively, in ways that could have a positive impact on health outcomes. Many states don’t spend a high percentage effectively; instead, they underspend the allocated funds and attribute the relatively poor outcomes to a lack of capacity. “This is a very interesting point. Tamil Nadu spends relatively more effectively than others. If most of the money is spent effectively, and, say, a small fraction goes through other hands (meaning, down the drains), the allocation is still quite well spent. But it’s the other way round in other states, as we found after distilling our observations and interview responses over several years”, Prof. VRM explains. But this leads to a third, even more interesting point. “If I have spent a larger share of the allocated money well than you have, this difference in spending, cumulated over years, makes a huge difference in terms of outcomes.” This difference, if repeated consistently, aided by factors such as an efficient bureaucracy and a diligent work ethic, and supported by systemic factors such as good roads, transportation and media, adds up to a cumulative difference that counts for a lot. This



offers one explanation for the relatively positive health outcomes in the state.
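A toy calculation makes the compounding argument concrete. The percentages, budget units and time horizon below are invented for illustration, since the actual figures are not given in the interview as printed.

def cumulative_effective_spend(budget_per_year, efficiency, years):
    # Total money that actually reaches health outcomes over the period.
    return budget_per_year * efficiency * years

# Two states with identical allocations but different spending efficiency:
state_a = cumulative_effective_spend(100.0, 0.90, 20)  # 1800 units on the ground
state_b = cumulative_effective_spend(100.0, 0.60, 20)  # 1200 units on the ground
print(state_a / state_b)  # a 1.5x difference from the same nominal budget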

Consortia for Comparative Research

Comparing different health systems to arrive at better policy practices for a particular region can be carried out in several ways. In recent years, Prof. VRM has been involved with at least two research consortia that seek to do just this. The first, the Consortium for Research on Equitable Health Systems (or CREHS), carried out comparative research in six countries – India, Nigeria, South Africa, Thailand, Kenya, and Tanzania – to generate knowledge on how to strengthen health system policies and interventions in ways that would “preferentially benefit the poorest”, such as by examining the impact of mobile health units on access to care. The second consortium, which evolved from CREHS, is called RESYST (Resilient and Responsive Health Systems), and aims to enhance the resilience and responsiveness of health systems in order to promote health and health equity and reduce poverty. Working in a consortium necessarily means that a lot of time is spent in sharing and discussing each stage of the research process across countries and teams. This involves regular meetings, time spent structuring the content and regularity of those meetings, arriving at common questions, developing a methodology and research instruments (the questionnaires) together, and interpreting and sharing the findings. Comparative studies take up a lot of time because all the teams involved have to reach a consensus on common questions that are meaningful in each country and comparable across them as well, and they must establish clearly what each country gets out of the exercise. Each step of the research process for one team must be in tandem with the steps taken by the other teams, and it can be difficult to maintain an equitable rhythm and balance while carrying out the work over a long period of time. There are internal checks and balances and timelines to ensure that the work proceeds relatively smoothly, and sub-groups that

keep moving back and forth before arriving at a satisfactory conclusion. How does international comparative research help craft good policy at the country or state level? Prior experience shows that learning from the experiences of other countries, as well as from our own history, helps construct efficient and well-designed infrastructure and delivery structures. “The impact of research on policy here is not a linear, direct or clear relationship, because it is difficult to predict exactly where, when and how research influences policy”, Prof. VRM remarks, “but we have very interesting ways of capturing this relationship.” One aspect of this is the engagement between researchers and policymakers, which builds gradually and takes off over time. “For example, each of the hundred-odd meetings and talks I have had with the government the past year is evidence of my physical and mental engagement – and its impact is different from handing in policy reports that nobody has time to read anyway (even if they want to). It is important to find ways to engage as researchers with policy makers in your own way and style”, says Prof. VRM.

Insights from Grounded, Participatory Policy Research

However, policy-oriented research work has its own complexities. One way to illustrate them is through Prof. VRM’s ongoing project on Universal Health Coverage (UHC), which seeks to pilot this concept for the state government of Tamil Nadu in two districts, one of which is Krishnagiri. In this district the research is being carried out specifically in the block of Shoolagiri. The project has been ongoing for around six months.



The Thottapattu sub-centre, the lowest unit in the healthcare delivery system (serving 5,000 - 6,000 people), in Tamil Nadu. (From right to left) Prof. Umakant Dash, the head nurse who manages the entire sub-centre, and Prof. VRM (2013).

For policy to work out effectively in practice, the research must also incorporate the psyche, ecology, terrain, geography and the multitude of other factors surrounding the region. Piloting the UHC involves a large number of household surveys, facility surveys, group discussions and focus group discussions across villages, and intense discussions with field functionaries. This is with the objective of collecting ground-level knowledge of illnesses, learning the expectations of the villagers, and keeping track of the facilities that are currently functional on the ground. “For this, a complete mapping has been done to first assess what is present on the ground. Next, it is important to find out how much people are spending out of pocket for healthcare (this was captured through a large survey of households) and to record the health-seeking behaviour of the villagers over the last one year, including for deliveries, prenatal and postnatal care, immunisation, access to public or private facilities, and out-of-pocket expenditures for

various illnesses, and so on”, explains Prof. VRM. Further, state-level consultations take place on developing an Essential Health Package, or EHP, including its contents and ways to guarantee its distribution through a publicly financed system. “Coming up with the EHP involves an intricate set of negotiations which include consultations with state-level officials, field functionaries and people living in the villages. Thus, the bottom-up views are collected along with the expectations residents have from the EHP”, says Prof. VRM. This material, which reflects people’s voices and their needs, is then used to reflect on what is doable and what is expected. It helps negotiate different meanings and consequences of the EHP and proceed forward to arrive at a package that combines the needs and expectations of all in an equitable way. “For policy to work out effectively in practice, research must also incorporate the psyche, ecology, terrain, geography and multitude of other factors surrounding the region”, Prof. VRM emphasises. For example, one peculiarity particular to Shoolagiri is that people there speak three languages, with different languages used for different activities. Taking this into account, particular areas and



policy recommendations have to be treated with sensitivity: for example, as Prof. VRM argues, “you cannot place someone from Tirunelveli as a Village Health Nurse (VHN) into Shoolagiri; she would speak not just Tamil but a different dialect of Tamil.” Thus, even the process of recruitment in public systems is one that is fraught with problems. Such issues may arise at different stages of the research work or policy implementation, and must be anticipated (or at least, mechanisms for swift redressal conceived) in order to ensure the smooth functioning of policy.

Research Narrative: Coming Full Circle

My last question is one that perhaps should have been the first. How did Prof. VRM find himself in this field? “After completing an MA in Economics from BITS Pilani, followed by a two-year break”, Prof. VRM says, “I found myself engaged in two research projects across Maharashtra, and later as a research assistant on a project assessing PHCs in Orissa. Around this time, aided by two wonderful research guides, I travelled all across five districts of Orissa and developed an interest in healthcare. Studying healthcare systems in Orissa in that time was very tough, and that project, the field work and the travelling stimulated my interest in the field. So I really owe a lot to Orissa!” I am surprised to find out that Prof. VRM is an alumnus of IITM, having completed a PhD here on the history of healthcare in South India under a renowned economic historian, Prof. S. Ambirajan, who taught at HSS, IITM. Prof. VRM’s thesis, and subsequent research, was based on archival work. After a one-year sabbatical at Harvard, Prof. VRM’s focus shifted towards more recent policy, and this has shaped his current research interests and work. In a similar vein, he also enjoys guiding research scholars from diverse disciplines, though the general rule is that scholars, whatever their disciplinary background, should have an interest in public health policy issues.

Speaking about how all facets of policy research tend to come together, Prof. VRM says, “Right now, we are working towards using our work in RESYST to help inform the UHC project, especially for increased exposure.” In fact, the UHC project itself has also tied into yet another project, funded by USAID, in which research institutions in India are trying to pilot UHC in eight Indian states (and this is just one component out of three in that project). “By now, it’s difficult to say what one thing I am working on by way of a funded subject, because as you can see all these projects are an organic evolution, and are connected in essence.” And as Prof. VRM points out, “You know, despite doubts about how meaningful your work is, you continue your research and keep pushing the frontiers of policy studies through your engagement with policy makers . . . and, naturally, there is no end to this process.” ⌅

Meet the Author Isha Bhallamudi is a fourth year Integrated M.A. student of the Humanities and Social Sciences majoring in Development Studies. She has been involved in writing about research and innovation through Immerse and T5E. Her research interests lie in policy research and its cross connections with health, poverty and gender. Isha can be reached at b.isha.ravi@gmail.com for comment, criticism or discussion!



Born an Entrepreneur

by Ananth Sundararaman Dr. Muhammad Yunus has said multiple times that “All humans are born entrepreneurs.” The revolutionary idea of Grameen Bank has brought millions of people employed in small enterprises, such as farmers, out of poverty. Dr. Arun Kumar and Dr. Suresh Babu at IIT Madras are trying to synthesise the available evidence on the links between microfinance and poverty.

It was 1974, and Bangladesh was facing one of the worst famines in its history. After quitting his job as deputy chief of the General Economics Division of the government’s planning commission, Dr. Muhammad Yunus had been serving as the head of the economics department at Chittagong University for a little over a year. It was during this period that the inspiration for Grameen Bank came to him, in the form of a trip to the village of Jobra in Bangladesh. What began by helping a woman struggling to make ends meet as a weaver of bamboo stools soon became a model to help families avoid the high interest rates of predatory lending. While allotting small loans totalling $27 to a group of 42 families as start-up money, little did he realise that he was laying the foundation for the now familiar Grameen Bank and what would go on to be hailed as a ‘miracle

cure’ for global poverty – microfinance. He went on to win the Nobel Peace Prize in 2006 for his efforts in creating one of the world’s most high-profile and generously funded development interventions. “Microfinance, to a layman, is basically a community initiative to induce very small savings for the people by pooling their savings and lending within the group”, says Dr. Arun Kumar, a professor in the Department of Management Studies (DoMS). Dr. Arun Kumar, along with Dr. Suresh Babu from the Department of Humanities and Social Sciences, has spent a considerable amount of time in the past two years visiting rural villages across various states in India. The primary aim of microfinance is to help poor people, who have minimal experience in dealing with banks, achieve a sustainable source of cash flow on a monthly basis. Aimed at people employed in small enterprises such as petty shops and tailoring, and even at farmers, especially in rural areas, this concept



Dr. Arun Kumar G is currently engaged as a professor in the Department of Management Studies, IIT Madras, and is involved in teaching Corporate Governance, Financial Accounting and Mergers & Acquisitions. His research interests lie primarily in Development Finance and Joint Ventures & Alliances. He has co-authored two textbooks on Management Accounting and can be reached at garun@iitm.ac.in

has been a breakthrough in unlocking immense opportunities for them. “It has even gained traction in urban areas, and also involves a gender component”, says Dr. Suresh Babu of the active participation of women and their role in self-help groups.

The concept of microfinance is to find collaborative ways to meet the needs of a group, primarily through creating and exchanging cash within the group. Consider a group of women, and let us say each of them contributes a fixed sum towards pooling their savings in the group every month. This pooled amount can be given as a loan to one of the members to conduct her business, under the condition that in the second month, while all the other members contribute their usual share, she repays an installment of her loan with interest on top of her monthly contribution. In the second month, the group thus has a little more than the first pool, which can be given as a loan to some other member of the group. By doing this on a rotating basis, the group continuously increases its savings, and the members also keep a check on each other so that no one fails to repay the amount. The arithmetic is sketched below.
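A minimal sketch of this rotating pool in Python; the group size, contribution and interest rate are illustrative assumptions, not figures from the study:

```python
# Illustrative numbers only: ten members, Rs 100 a month, 2% monthly interest.
members = 10
contribution = 100
rate = 0.02

# Month 1: the pooled savings go out as a loan to one member.
pool = members * contribution
loan = pool
print(f"month 1: Rs {pool} pooled and lent out")               # Rs 1000

# Month 2: the other nine contribute as usual; the borrower contributes
# too, and repays one instalment of her loan with interest on top.
instalment = loan / members
pool = (members - 1) * contribution + contribution + instalment * (1 + rate)
print(f"month 2: Rs {pool:.0f} available for the next loan")   # Rs 1102
```

The pool grows a little every month, which is what lets the group lend to each member in turn without any outside capital.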

This model has been a great success in combating the fact that a large part of the rural population is not part of the formal financial system. By eliminating the middleman, borrowing from a money lender at high interest rates and being trapped in a vicious circle of debt for generations is no longer their only option. By being part of such a group, an individual can manage to raise a sizeable loan without collateral and at a reasonable rate of interest. The fact that the borrowers paid Muhammad Yunus back in full and on time spurred him to start travelling from village to village, offering more tiny loans and cutting out the middlemen. Dr. Yunus was determined to prove that lending to the poor was not an ‘impossible proposition’, and Grameen Bank adopted its signature innovation: making borrowers take out loans in groups of five, with each borrower guaranteeing the others’ debts. Thus, in place of foreclosure (banks selling the property of the borrower to recover the debt they are owed) and the low credit rating that usually defines borrowing from a bank, Grameen depends on an incentive at least as powerful for poor villagers: the threat of being shamed before neighbours and relatives. Dr. Arun Kumar and Dr. Suresh Babu have undertaken research on this financial system over the past year, in collaboration with the Department for International Development (DFID), UK, and Hand-in-Hand, a non-governmental organisation (NGO) headquartered in the UK. Hand-in-Hand was in fact started in Tamil Nadu by Dr. Kalpana Shankar, the wife of a district collector in Coimbatore, with the help of Dr. Percy Barnevik, a Swedish business executive and philanthropist. Interestingly, Dr. Kalpana earned her PhD in



A microfinance meeting in progress in Kerala; Image Publicly available via Wikimedia Commons

nuclear physics and had little formal training in finance and economic development. Hand-in-Hand is today present in over ten countries across the world, including Afghanistan and Lesotho. Its mission is to work for the economic and social empowerment of the poorest and most marginalised populations by supporting the development of businesses and jobs. It receives funding from a number of different sources, including individuals, corporations, bilateral and multilateral institutions, and trusts and foundations. Hand-in-Hand India has created over a million jobs, set up thousands of Citizens’ Centres and covered over a million households under its various programmes.

“There were two factors that led to both of us joining forces to work on this area”, says Dr. Suresh Babu. “He (Arun) works in finance while I am interested in development processes. Microfinance provided an overlap in terms of finance and

development, and we discovered that there is some potential research that can be conducted together in this area.” They had individually worked on previous assignments for Hand-in-Hand and decided to collaborate on research when a new assignment on microfinance was offered. The terms of the project were to synthesise available evidence on the links between microfinance and poverty. “We were assessing if the investments into the NGO were yielding the results they were expected to, and if there is a social return on those kinds of investments”, says Dr. Arun. The NGO assisted them with their field-based research in visiting different villages, from Pali in Rajasthan to Cuddalore in Tamil Nadu, where it had a presence. They looked into various cases of self-help groups and communities which received support from the NGO to create a system of lending and borrowing within the community. “We came across different scales of operation during our field visits. One of the most successful cases is ‘Kudumbashri’ in Kerala.” A female-oriented, community-based poverty reduction project that was initiated by the Government of Kerala,



‘Kudumbashri’ receives regular aid and assistance from the government in conducting its activities and increasing its reach. Its strategy is one of forming women’s collectives in different villages and providing skill upgradation and training sessions. Small savings generated by the families are pooled at various levels as thrift and used to attract credit from banks to support micro-enterprises for sustainable economic development. One of the key reasons cited for its success is the ability of any woman to become involved with the organisation, irrespective of whether she is below or above the poverty line. Thorough background checks make the need for a voter ID card or other valid identity proof unnecessary, and this paves the path for the wide impact that Kudumbashri has managed to create. There have also been individual success stories that are detailed in the research. While the ultimate goal of this system is to generate employment and sustain local entrepreneurial businesses, this may not always be the case. By visiting different villages across the geography of the country and covering a wide range of specific industries from the thousands of villages where the NGO operates, they concluded that while the model leads to a tangible benefit for borrowers, there may be specific situations when the system may not

work. For instance, money borrowed to pay tuition fees for children does not have an immediate return on investment. This may lead to a situation where the model breaks down, as the resources pooled in by different members help only a few individuals while the rest have to wait a long time to see the amount repaid. Both Dr. Arun and Dr. Suresh have spent a lot of time analysing different types of SHGs, and have compiled a report detailing the complete spectrum of the impact of microfinance in helping eradicate poverty in India. Moving forward, the goal is to conduct research to understand more about how these loans are consumed by the people taking them. The aim of microcredit is, after all, to teach the financially disadvantaged the basic financial principles they need to sustain the growth that is initiated by SHGs. While it’s not a one-stop tool for the eradication of poverty, it is definitely an important precondition to economic development. In the words of Dr. Yunus, “All human beings are born entrepreneurs. Some get a chance to unleash that capacity. Some never got the chance, never knew that he or she has that capacity.” ⌅

Meet the Author Ananth Sundararaman is a third year undergraduate student in the Department of Civil Engineering at IIT Madras. He is also pursuing a Minor in Economics. He spends his time between his hobby of quizzing and watching Liverpool play, come the weekend. A believer in the underlying philosophy of Asterix, his biggest fear is the sky falling on his head.




Perhaps one of the most important aspects of society is communication. It’s what allows diverse populations from across the world to cooperate and socialise, and ideas and opinions to spread around the globe and make us a truly global community.

From writing on papyrus scrolls to sending messages instantly around the world, methods of communication have evolved rapidly over the years. And yet, a concern that has never ceased to exist is that of privacy. The need to keep important messages safe from prying eyes and ears resulted in the field of study now known as cryptography. The word conjures up images of hackers locked in battle with cryptographers, attempting to ferret out secrets of international import. But cryptography has far older origins than one might think. The oldest use of codes can be traced to Egypt around 1900 BCE, where non-standard hieroglyphs were found carved into stone. Since then, codes and ciphers grew progressively more complicated, from the Caesar cipher employed by Julius Caesar (simply shift every letter of the alphabet to the left or right by a fixed number of letters) to the supposedly unbreakable Enigma employed by the Germans in WWII. The Allied efforts to crack Enigma, largely aided by the work of Alan Turing, marked the beginning of the era of computers – an era that would see cryptography mutating into a well-defined field of study. It is in this field that Dr. Santanu Sarkar, from the Department of Mathematics, works.
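As an aside, the Caesar cipher mentioned above is simple enough to fit in a few lines of Python. This sketch (the function name is my own) shifts each letter by a fixed amount and wraps around the alphabet:

```python
# A minimal Caesar cipher: shift letters by a fixed amount, wrapping at 'z'.
def caesar(text, shift):
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return ''.join(result)

print(caesar("ATTACK AT DAWN", 3))   # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))  # decryption is just the reverse shift
```

Its weakness is obvious: there are only 25 possible shifts to try, which is exactly why ciphers had to keep growing more complicated.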

His research considers encryption, as he says, “from the attacker’s point of view.” It concerns the use of mathematical constructs called lattices in attempts to break the RSA cryptosystem, one of the most common encryption methods in use today. Before explaining his research, Dr. Sarkar outlines the history of modern cryptography. “The foundations of modern cryptography were laid in 1976, when Whitfield Diffie and Martin Hellman published a revolutionary paper.” This paper outlined a new concept – the public-key encryption system. Till that point, cryptosystems used symmetric-key encryption. This meant that both the receiver and the sender of information used a single, shared key (a term for a large number used in the encryption process) to encrypt plaintext (unencrypted information) and to decipher ciphertext (encrypted information). This necessitates a secure channel through which the key can be shared between the sender and the receiver. But the necessity of a secure channel in order to set up a secret key was a practically insurmountable chicken-and-egg problem in the real world. Diffie and Hellman’s paper, on the other hand, posited that a shared key was not necessary. Instead, they proposed that two keys be used – a public key and a private key. The public key would be available to anyone who wanted to communicate securely with a system, while the private key would be known only to the system itself. Encryption would be carried out with the public key and decryption with the private key. It was also mandatory that the private key not be deducible from the public key, as that would compromise the system’s security. Since the private key didn’t need to be shared with anyone via a potentially insecure channel, public-key encryption was clearly the better choice. Although Diffie and Hellman were able to prove the feasibility of such an encryption system, they weren’t able to come up with a viable system themselves. That was accomplished two years later by Ron Rivest, Adi Shamir and Leonard Adleman, researchers at MIT. Their eponymous cryptosystem, the RSA algorithm, has since become one of the strongest known encryption standards in the world.



Dr. Santanu Sarkar is currently an Assistant Professor in the Department of Mathematics at IIT Madras. He was previously a guest researcher at the National Institute of Standards and Technology. He received his PhD from the Indian Statistical Institute, Kolkata. His main research interests include cryptology and number theory.

It is now used mainly as part of hybrid encryption methods: data is encrypted using a symmetric-key system, and the shared key is then encrypted using RSA. This is largely because of the RSA algorithm’s computational inefficiency – encrypting the data itself using RSA would take a very long time. At a basic level, the RSA algorithm is based on the premise that the product of two large prime numbers is very hard to factorise. Put into mathematical terms, consider two prime numbers, P and Q, and their product, N. Then, the integer solutions of the equation p(x, y) = N − xy = 0 are the factors of N. Trivial (irrelevant) solutions to this equation include (x = 1, y = N) and (x = N, y = 1). The important solutions, though, are (x = P, y = Q) and (x = Q, y = P). While a computer can solve this equation relatively fast when N is small, larger values of N result in runtimes that render decryption attempts infeasible. For example, a 1024-bit value of N (i.e., N ≈ 2^1024) would take approximately 10^211 years to factorise. To put things in perspective, the age of the universe is around 10^10 years! In cryptographic terms, here N is the public key and P and Q are the private keys. At this point, Dr. Sarkar mentions a caveat. “RSA encryption can’t be broken by conventional computers. However, there is an algorithm, called Shor’s algorithm, that can be used to break RSA encryption using quantum computers.” But since quantum computers are still in nascent stages of development, the RSA algorithm is still considered to be a bastion of cryptography.
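To make the premise concrete, here is a toy RSA key in Python with tiny primes (all numbers are illustrative; real keys use primes hundreds of digits long):

```python
# A toy RSA key -- illustrative numbers only.
P, Q = 61, 53             # the private primes
N = P * Q                 # the public modulus: N = 3233
phi = (P - 1) * (Q - 1)   # Euler's totient of N
e = 17                    # a public exponent, coprime to phi
d = pow(e, -1, phi)       # the private exponent (Python 3.8+)

m = 65                    # a message, encoded as a number smaller than N
c = pow(m, e, N)          # encrypt with the public key (e, N)
assert pow(c, d, N) == m  # decrypting with d recovers the message

# Breaking the system means finding the non-trivial roots of
# p(x, y) = N - x*y, i.e. recovering P and Q. Trial division is
# instant for N = 3233, but hopeless for a 1024-bit modulus.
x = next(x for x in range(2, N) if N % x == 0)
print(x, N // x)          # -> 53 61
```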

The mathematical framework above describes the first and simplest RSA variant. There have been others proposed since then, and it is on one of these variants that Dr. Sarkar works. This variant considers the equation ed = 1 + k(N + s), where e and N are known (and hence are the public keys) while d, k and s are unknown (and hence the private keys). This can be expressed as a polynomial p(x, y, z) = ex − 1 − y(N + z). As before, obtaining the non-trivial solutions of this polynomial is equivalent to breaking this particular variant of the RSA encryption.
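The toy key from the sketch above satisfies this equation exactly: in textbook RSA, ed = 1 + kφ(N) for some integer k, and φ(N) = (P − 1)(Q − 1) = N + s with s = 1 − (P + Q). A minimal check, again with illustrative numbers:

```python
# Verifying ed = 1 + k(N + s) on the toy key -- illustrative numbers only.
P, Q = 61, 53
N, phi = P * Q, (P - 1) * (Q - 1)
e = 17
d = pow(e, -1, phi)

s = 1 - (P + Q)              # so that phi = N + s
k = (e * d - 1) // phi
assert e * d == 1 + k * (N + s)

# Equivalently, the polynomial p(x, y, z) = ex - 1 - y(N + z)
# vanishes at the private point (x, y, z) = (d, k, s).
print(d, k, s)               # -> 2753 15 -113
```

An attacker who can find that root of the polynomial has, in effect, recovered the private key.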

Decrypting any variant of the RSA algorithm was considered infeasible until Don Coppersmith, an American mathematician, established a theorem related to the factoring of numbers. When used in conjunction with mathematical constructs called lattices, it was found that the RSA algorithm could be broken in polynomial time (meaning that the running time is a polynomial function of the input size – the term is largely used to denote programs whose running times don’t blow up too fast). Fortunately for cryptographers around the world, the guarantee of success for such an attack was attached to certain conditions. The encryption could be broken on a feasible time scale only if d, one of the private keys, was less than N^0.292 – which implies that for a secure RSA design, d would have to be greater than N^0.292. But Dr. Sarkar prefers to think of it as an upper bound for the system to be vulnerable, rather than a lower



A schematic explaining how public-key encryption works. Image source: Wikimedia Commons

bound for security. “I always look at the problem from the attacker’s point of view”, he says with a smile. “Hence, I think of values of d for which the system is insecure.” This bound was proven by two cryptographers, Dan Boneh and Glenn Durfee, in 1999. For example, if N were a 1000-digit number, the concerned RSA system would be secure as long as d was a 293-or-more digit number. However, the mathematical community conjectured that for the RSA system to be truly secure, d would have to be greater than N^0.5. Using the example from before, d would have to have more than 500 digits.
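The digit counts here follow from simple arithmetic on the exponents (the 1000-digit modulus is an illustrative choice, not a figure from the paper): if N has about 1000 decimal digits, then N ≈ 10^1000 and N^a ≈ 10^(1000a).

```python
# Digit counts implied by the bounds on d, for an illustrative 1000-digit N.
digits_N = 1000
for a, label in [(0.292, "breakable below this (Boneh-Durfee)"),
                 (0.5,   "conjectured requirement for true security")]:
    print(f"{label}: d of roughly {a * digits_N:.0f} digits")
```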

Any increase in the bounds on d would have two consequences. First, it would expose as insecure any RSA systems that used values of d below the new bound. Secondly, an increase in the value of d in any RSA system results in a significant increase in the time taken to decrypt it using Coppersmith’s theorem. Hence, improving the lower bounds on d contributes greatly to improving the security of RSA systems used across the world.

Dr. Sarkar goes on to explain that he worked on a further variant of this RSA scheme. “N doesn’t have to be only a product of two prime numbers. It can instead be of the form N = P^r Q, where r is another integer. I worked on proving bounds on d when r = 2.” Dr. Sarkar’s work improved the bounds on d from d < N^0.22 to d < N^0.395. Talking about the implications of his work, he says, “The results published will prompt RSA system designers to revise their designs. Since there is a larger range of d over which RSA can be broken, systems will have to be designed with the new bounds in mind.”

A lattice, rendered in Sage. Image by the author

I mention to Dr. Sarkar that his work seems highly theoretical. He’s quick to point out that it does



involve some simulations – he runs attacks on RSA variants using an open-source software package called Sage. This serves to validate his results. “My work, and in fact all work since Boneh and Durfee’s paper, involves some implicit mathematical assumptions that no one has formally proved. I need to run actual attacks in order to validate the bounds I derive.” But, he admits with a rueful grin, “It can get tedious at times. You just have to keep trying the problem from different angles. I also like what I do.” When I ask him how he decided to venture

into cryptography, he points to his alma mater, ISI Kolkata, as his inspiration. “ISI is well known for cryptography. Once I started working in this field, I saw problems of this type, and they interested me. I still work with colleagues there, as well as with collaborators in China.” Dr. Sarkar is currently attempting to improve the bounds described above even further. He’s also working on problems related to another encryption algorithm, RC4, primarily employed in wireless networks. ⌅

Meet the Author Nithin Ramesan is a B.Tech. student of Electrical Engineering. He likes to quiz, write and read Calvin & Hobbes. For bouquets or brickbats, he can be contacted at nithinseyon@gmail.com.




A marathon runner has persistent pain in his knee that leaves him unable to walk. He sees an orthopaedic specialist at a hospital and undergoes knee replacement surgery, which takes all of one day. He can now perform any activity he did before, and wins that marathon he was training for. A happy ending – the story advertised by every bone and joint specialty hospital. What they don’t talk about, though, is that it takes several weeks to design and manufacture the implant that’s tailored to replace his knee precisely. Moreover, these implants have an average lifetime of only about 15 years, and will have to be replaced by another surgery. That it takes him several weeks of physiotherapy to regain his full range of motion, and that even then he may experience chronic pain, is an issue that is conveniently ignored.

Total Joint Replacement is seen as the biggest success story of orthopaedic surgery. It has helped hundreds of thousands of people regain or maintain their functional independence and live fuller, more active lives. However, the surgery is prohibitively expensive, rendering it out of reach for the millions of people who suffer hip, knee and shoulder joint failure every year. Dr. Soundarapandian and his team from the Department of Mechanical Engineering are determined to change this. They envision a cost-effective instrument, integrated with a diagnostic tool, that designs and manufactures the body part to be replaced. A patient with joint failure walking into a hospital would go through the rigmarole of diagnosis to surgery in a few hours. As a first step towards this dream, they are working on eliminating the shortcomings of existing bone implants by synthesising a magnesium implant with a calcium-phosphate (CaP) coating deposited by a laser-based coating technique.

The high loads that orthopaedic implants need to support restrict the selection of feasible materials. Today, stainless steel, cobalt, chromium and titanium alloys have been successfully used to fabricate implants because of their strength, comparatively low stiffness, light weight and relative inertness. However, these implants release toxic metallic ions. Moreover, analysis of metal and metal-alloy devices provides convincing evidence that implant failure occurs because of a mismatch of the mechanical and chemical properties of the implant with those of the bone at the bone-implant interface. This mismatch leads to the formation of a fibrous layer of tissue at the interface, giving rise to small gaps which cause movement at the interface. Ultimately, this causes failure of the implant and requires subsequent surgeries to replace the loose implant. One approach to alleviating this problem has been the use of CaP coatings applied to the implant surface. This enables researchers to consider materials with attractive properties that had earlier been rejected for their lack of biocompatibility.

The CaP mineral hydroxylapatite (HAP), Ca5(PO4)3OH, has attracted considerable attention because of its close resemblance to the chemical and mineral components of teeth and bone. As a result, HAP is biocompatible with bone. Instead of forming a fibrous tissue layer at the implant-bone interface like normal biomedical alloys, implants with HAP coatings have been shown to form a thin layer bonding with the bone, and even to promote bone growth. Plasma spraying is the most popular, and the only Food and Drug Administration (FDA) approved, method for applying CaP coatings to implant surfaces. This process involves the high-velocity spraying of molten HAP powder onto an implant surface. Upon impact with the substrate, the material rapidly cools and forms a dense coating with a morphology consisting of layers of HAP impact splats. Coatings synthesised by this method form a dense, adherent layer of CaP on metal substrates.



Dr. Soundarapandian did his Ph.D. in Mechanical Engineering (2010) at Southern Methodist University (SMU), Dallas, USA followed by Postdoctoral research at University of North Texas (UNT), Denton, USA. Currently, he is an Assistant Professor of Mechanical Engineering at IIT Madras. His research focuses on synthesis and characterisation of structural and bio-materials, LASE, computational modelling, manufacturing automation, fabrication of next-gen bio-implants, and laser applications in medical industry.

While plasma spraying is a well-understood process, the control of its variables is quite complicated. The extremely high temperatures (10,000°C to 12,000°C) used in the plasma spray process can vastly affect the properties of the final coating, and result in potentially serious problems, such as in the coating of complex implant devices containing internal cavities. More serious is the potential for the formation of amorphous CaP phases in the film, with a Ca/P ratio deviating from that of stoichiometric HAP, which has a Ca/P ratio of 1.67. There is also concern over alteration of the coating structure. In addition, spraying plasma to coat within the pores of porous metal materials proves difficult, because it is a line-of-sight process.

During his PhD training, Dr. Soundarapandian identified magnesium (Mg) as a suitable alternative implant material, because its mechanical properties are closer to those of natural bone. However, due to the corrosion of Mg in physiological environments, it cannot be used directly. The solution proposed by the team is deploying HAP coatings on Mg implant surfaces using a laser-guided manufacturing technique. This exploits the bio-compatible and bone-bonding properties of the ceramic, while using the superior mechanical properties of Mg implants.

Additive manufacturing caught the professor’s eye during his Master’s in mathematical modelling at the Blekinge Institute of Technology, Sweden, where he developed a modelling technique to predict the right manufacturing process given the required geometry and material. However, additive manufacturing typically isn’t intended to accommodate materials with dissimilar properties. Bones are a composite of both organic and inorganic materials. This implies that exactly mimicking a bone would require materials with disparate properties, and existing additive manufacturing techniques were thus inappropriate for the task at hand. Further, bones have a porous geometry which must also be mimicked by the implant, in addition to its being compatible with the bone environment, a property called osteo-integration.

To address all these concerns, the research group has developed a new instrument that can accommodate metals, ceramics and polymers. The novel technique involves a commonly used 3D printing method called Fused Deposition Modelling. The process basically involves a hot air gun, controlled by a robot arm, that zig-zags back and forth depositing layers of powdered HAP mixed with a polymer mixture that promotes binding to the surface of the metal. This surface is air-dried and then bombarded with a laser at a pre-characterised energy density that minimises the corrosion rate for a given combination of materials. Lasers are very precise and powerful, and since the process is a non-contact one, it is ideal for bio-implants.

A detailed schematic of the deposition process

The different components of the 3D printing lab.

The next step in the process of designing a novel implant is a set of rigorous tests. The first step to ensure bio-compatibility is a set of in vitro tests, in which the implant is immersed in artificially developed bio-fluids that mimic physiological conditions and studied over several days. The implants manufactured by the research group passed this stage, and the team noticed an interesting effect. Not only was the implant bio-compatible, meaning it wasn’t harmful to the body, but it also turned out to promote bone growth. This led to another set of tests to study cell behaviour, particularly adhesion to the implant. The experiments, performed in association with a collaborator, showed increased adhesion of bone cells (osteoblasts) to the implant surface. The team subsequently worked on tweaking several factors, such as surface chemistry and topology, to enhance adhesion. Surface chemistry was altered using several polymers proven to increase cell adhesion. The surface roughness was also altered, to study its effects on cell adhesion.

Currently, the team is busy ensuring that the method, the instrument and the process can be used for wildly different materials, including several bodily derived materials. One such bodily derived material being considered is fibrin, a fibrous protein involved in blood clotting. It’s a tough, resilient material that has properties very similar to those of cartilage. They have already extracted fibrin and are currently working on fabricating implants from it. As with any biomedical device, ensuring reproducibility continues to be a major challenge, and they’re working on verifying and validating their methods for the same. The next step will be to use these implants in animal models for in vivo studies. Dr. Soundarapandian is currently looking for collaborators to carry out some of the biological tests. He says that the next stage will take years, after which he would be allowed preliminary human trials, in his long haul to see his dream come to fruition. ⌅

All Images are courtesy of Dr. Soundarapandian.

Meet the Author Aparnna Suresh is a final year student of Biotechnology at IIT Madras. While not holed up in the lab dreaming about creating a Jurassic Park, she enjoys quizzing, reading and swimming. Her long-term goal is to pursue a career in academia, and her research interests include synthetic and systems biology, and biological computing. She can be reached at aparnnaa.suresh@gmail.com.


5 millimetres is just the right size to do something big, claims Apple. Their unusually sleek iMac, which is just 5 mm thick at its edge, is as awe-inspiring to a materials scientist as to a gadget geek, because joining the ultra-thin monitor panels is a challenging materials engineering problem. That is why Apple overruled conventional metal welding processes and used a relatively new approach called friction-stir welding, creating a product enclosure too thin and seamless to take apart.

Friction-stir welding is a process where pressure and friction-generated heat are used to join materials. Since ancient times, humans have known that thermal and mechanical processes can be used to morph materials to our needs – liquid water can be solidified by cooling, carbon can turn into diamond at high pressures and temperatures, and sheets of metal can be joined when their edges are melted by a hot flame. Friction-based processes are similar, albeit they use friction to generate heat under intense pressure (imagine a crushing weight supported on an area equivalent to two fingertips). The study of these processes is one of the main focuses of the Materials Joining group at IITM’s Metallurgical and Materials Engineering department. The group uses friction to achieve a variety of things – from improving the strength of metals, to coating their surfaces, and welding or compositing very different materials. Prof. Ranjit Bauri, one of the faculty members in the group, describes his work as ‘surface engineering’. He uses a method called friction-stir processing to enhance the hardness and wear resistance of metal surfaces. The apparatus is as basic as the one shown in the image below:

Schematic of friction stir processing setup. Courtesy: Dr. Ranjit Bauri

The vertical tool, which is under intense downward pressure, rotates and translates along the metal substrate. This produces frictional heat and local mixing at the interface, causing what’s called plastic deformation of the material. Plastic deformation is almost like flow in the solid state. Flow in the solid state may seem counter-intuitive, but this phenomenon of plasticity is universal in our daily life. Whenever we pound a bar of iron, the bar gets permanently deformed, literally ‘flowing’ into its new shape without melting. Or even when we iron our clothes, which are typically made of polymers, the creases flow out.

The Materials Joining Laboratory at IIT Madras comprises Prof. Gandham Phanikumar, Dr. GD Janaki Ram, Dr. Ranjit Bauri, and Dr. Srinivasa Rao Bakshi from the Department of Metallurgical and Materials Engineering. Their research interests span surface engineering, microstructure analysis, additive manufacturing, welding and welding simulation, and the study of composites and alloys. Housed in a large Central Workshop bay, they use a wide array of testing and analytical tools to investigate and improve material behaviour.



Metals, which are the materials under consideration here, get such plasticity from their grainy internal microstructure. One way to understand microstructure is to look at a metal not as one big solid slab, but as made of many microscopic interlocking polygons. Each of these polygons is a ‘grain’ and shares boundaries with its neighbouring grains in three dimensions. The sizes and boundaries of these grains strongly affect almost all industrially useful properties of the material, like strength, ductility, hardness, corrosion resistance and wear resistance. Hence, understanding and improving the microstructure of metals has been the holy grail of the metallurgical sciences.

Friction-stir processing (FSP) is one such process that helps refine the grain size, says Prof. Ranjit. When this method was discovered in the late nineties, the group here was the first to apply the process to make surface composites. Composites are an immensely useful class of materials, as they give us new properties – say, a mix of the strength of one metal and the low weight of the other. Aluminium (Al) and nickel (Ni) fit this bill, as nickel gives higher hardness to a widely used, low-density metal like aluminium. Now, the easiest and crudest way would be to melt the metals and mix them. But melting them to make an alloy results in the formation of brittle intermetallic compounds like Al3Ni. These unwanted compounds arise because the energy supplied to melt the metals also makes chemical reactions between them thermodynamically feasible. One way to overcome this trouble, as this group discovered, is to make the composite using a solid-state process like friction stirring. The second metal can be introduced into the ‘stir zone’ in a variety of ways and embedded into the other metal’s surface by the movement of the vertical rod tool.

Grain structure of aluminium obtained using Electron Backscatter Diffraction. Courtesy: Dr. Ranjit Bauri

The end product, in the case of aluminium and nickel, is a metal-metal composite that’s three times harder than aluminium on the surface. This means it can resist wear more effectively. “The beauty of the process, though”, as Prof. Ranjit puts it, “is that it doesn’t decrease the ductility, which is the ability of aluminium to be drawn into wires, too much; we are able to retain most of aluminium’s ductility.” This is a big deal, because there is often a trade-off between strength and ductility in conventional strengthening processes. Prof. Janaki Ram, another faculty member in the group, has achieved similar results with metal-ceramic composites, despite ceramics being a completely different class of materials from metals. Making composites, though, is just one of the multitude of uses of friction-related processes. Prof. Janaki Ram is also keen on applications of friction-related processes in additive manufacturing. Additive manufacturing, or 3D printing, is the computer-operated, layer-by-layer manufacturing of an object. This is unlike a



normal setting, where different parts of an object are cast first and then welded together. In a journal paper, this group was the first to propose that friction surfacing, a process very similar to friction-stir processing, could be used for the layer-by-layer manufacturing of metal objects. The only difference, in fact, between friction-stir processing and friction surfacing is that the rod tool used in the latter is consumable. While traversing the substrate, the rod tool itself significantly softens at the interface because of the high temperature and pressure. This leads to the establishment of metallurgical bonds between the metal atoms of the rod and the substrate, causing some material to come off the rod and deposit on the surface of the substrate as one layer in a step-by-step layer addition process.

Six-layer cylindrical deposit consisting of a fully enclosed internal cavity and its X-ray radiograph, Courtesy: Dr. GD Janaki Ram

Friction surfacing’s biggest advantage lies in the fact that it is a solid-state process. This means that it is suitable for use with dissimilar materials which would, say, be incompatible with each other in the melt state. There are many ideas-in-waiting for products using dissimilar materials – like a turbine with the input end made of a material optimised for heat resistance and the output end optimised for strength, or a bottle with a magnetic bottom to be held in place by a magnetic holder.

Close-up of Friction Stir Welding. Image source: TWI, via Flickr

The possibilities are unlimited. One of the more established uses of friction is the friction-stir welding process used in building products like Apple’s iMac, NASA’s rovers or, more traditionally, aerospace components produced by the likes of Boeing and Airbus. Invented in 1991 by The Welding Institute in the UK, friction-stir welding was competitively patented until recently, closing off avenues for external research, says Prof. Gandham Phanikumar, another faculty member in this group. Once the patent expired, research opened up, and techniques like friction-stir processing and friction surfacing were proposed as modifications of the original welding process. While these techniques are yet to be undertaken on a large commercial scale in India, organisations like the Naval Research Board are looking at possibilities of using friction welding and surfacing for in situ repairs or the application of coatings for marine vehicles.

Reading through the PhD thesis of H Khalid Rafi, who worked in this group, one gets an idea of the immense potential of friction-based processes to serve as an alternative to conventional techniques. In fact, one of his papers on friction welding of aluminium alloys has



already been cited over fifty-five times in five years. While there is still a long way to go, the group hopes to continue drawing more insights into friction and its applications in materials processing. ⌅

Meet the Author Raghavi Kodati is a senior undergraduate student in the Chemical Engineering department, whose research interests are in microfluidics and materials. While working on this article, she got fascinated by the history of material joining processes – from their use in iron pillars in ancient India to today’s aluminium-lithium SpaceX rockets. Excited about science writing, she has written for three issues of Immerse.




Water – one of the basic necessities for life – holds secrets that never cease to astonish researchers. Its liquid form is denser than its solid form. It expands both when heated above 4°C and when cooled below 4°C. It is a universal solvent, dissolving a large variety of substances. In addition to all these peculiar qualities, researchers have now understood that water at freezing temperatures and high pressures can store certain gases too. Dr. Jitendra Sangwai, a professor at the Department of Ocean Engineering, IIT Madras, has done research that reveals this unique aspect of water.

Clathrate hydrates were looked upon as obstacles to the flow of natural gas in pipelines until a Russian petroleum engineer turned professor, Yuri F Makogon, discovered that clathrate hydrates can be used as a source of energy by extracting the natural gas that has accumulated inside them. He went on to estimate the amount of natural gas in hydrates present worldwide, and paved the way for present-day researchers to find ways to extract it and to understand the behaviour of hydrates.

Water at very high pressures and chilling temperatures (0°C to 10°C) turns into a new, ice-like crystalline structure. This crystalline structure has small gaps in it, making room for gas storage. The crystalline structure together with the gas is known as a clathrate hydrate, or a gas hydrate. A gas hydrate, then, is just a cage made of water. Most low-molecular-weight gases, such as hydrogen, oxygen, nitrogen, methane (natural gas), carbon dioxide and hydrogen sulphide, can be trapped inside the cage, but each gas needs different pressure and temperature conditions for this to happen. Each hydrate derives its name from the gas present in the cage: if methane is present in the cage, it’s called methane hydrate. With this interesting property of water, harmful gases can be stored away, or useful gases present in existing hydrates can be extracted. The property of water transforming into cages and trapping gases was discovered independently by two English chemists – Joseph Priestley in the 18th century and Humphry Davy in the 19th century. But attention was drawn towards it in the 20th century, when EG Hammerschmidt, an engineer working in a Texas-based natural gas company, found that these hydrates were blocking natural gas pipelines in winters.

Molecular structure of gas hydrates. Courtesy: MIDAS (Managing Impacts of Deep Sea Resource Exploitation)

The oceans are a conducive environment for the formation of methane gas hydrates. At the depths where hydrates are found, the pressure is very high due to the sheer height of water above, and the temperature is low, as the sun’s rays can’t penetrate such depths. Vast reservoirs of methane hydrates are found in marine sediments at depths of several hundred metres, close to continental margins, and in onshore permafrost – soil, rock or sediment that remains frozen for more than two consecutive years. Availability close to continental margins means reduced extraction and production costs, without having to spend on deepwater drilling, which is a costly and risky affair. This is a boon for India, since it has a very long continental margin. In India alone, the natural gas present in methane hydrates is estimated to be many times the currently known natural gas reserves from other sources.



Dr. Jitendra Sangwai is an Associate Professor in the Department of Ocean Engineering at IIT Madras. He received his PhD in Chemical Engineering from IIT Kanpur and is the founder of Gas Hydrate and Flow Assurance Laboratory at IIT Madras. He holds eight patents in the field of gas hydrate, enhanced oil recovery and flow assurance. His research interest lies mainly in the field of gas hydrates, enhanced oil recovery, rheology of drilling fluids, flow assurance, and polymer and nanotechnology applications for upstream oil and gas engineering.

“Natural gas hydrates offer a realistic solution compared to other polluting fossil fuels,” says Prof. Sangwai. Methane, which is present in the cage-like structures of methane hydrates, can be exchanged with the greenhouse gas CO2 produced by burning that methane. This is a kind of zero-carbon energy scheme. It helps in CO2 sequestration – capturing CO2 and burying it back in the earth as part of the hydrate. This method of fixing CO2 has two advantages. One is cleaning up the CO2 that has been emitted, and the other is that it is unlikely that CO2 stored in the form of hydrates will come back to the Earth’s surface.

While drilling, CO2 is injected into the well for recovering the oil trapped in the tiny pores of rocks. The injected CO2 pumps out the trapped oil and, in the process, can also be consigned to the cage of the hydrates. This idea led researchers to the concept of flue gas separation. The drilling environment is a dirty place, emanating many dangerous flue gases like oxides of carbon, nitrogen and sulphur. Gas hydrates can save us here. If the pressure and temperature at which flue gases form hydrates are known, then by sending in water at that precise temperature and pressure, one can trap the gases in the cage-like structures of the hydrates.

A High Pressure reactor used for gas hydrate studies at the Gas Hydrate and Flow Assurance Lab

Gas hydrates also offer an alternative to the high expense of transporting and storing Liquefied Natural Gas (LNG). LNG infrastructure often adds to the cost of importing, as it needs a special type of floating tanker and heavy refrigeration facilities, whereas gas hydrates need very minimal storage space – one cubic metre of methane hydrate can store roughly 160 cubic metres of methane. Transporting hydrates is quite simple – it can be done using existing pipelines, with the hydrates in the form of slurries.
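A rough feel for what that ratio means for shipping (the 160:1 figure is an approximate literature value, used above and here only for illustration):

```python
# Back-of-the-envelope shipping arithmetic -- illustrative numbers only.
gas_to_ship = 1.0e6        # cubic metres of methane to be moved
hydrate_ratio = 160        # m^3 of methane stored per m^3 of hydrate
slurry_volume = gas_to_ship / hydrate_ratio
print(f"{slurry_volume:.0f} m^3 of hydrate slurry")   # ~6250 m^3
```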

Another potential application of gas hydrates is employing them in desalination. Here, seawater is taken in a chamber and CO2 gas is passed into it at very low temperature and high pressure, resulting in the formation of CO2 hydrates. By taking the formed CO2 hydrates into another chamber and dissociating them – by increasing the temperature and decreasing the pressure – pure water that is free from salts can be obtained. This works because salts cannot form hydrates. Desalination using



Dr. Jitendra Sangwai and his research team at IIT Madras

this technique is considerably cheaper than conventional techniques. The problem is that little is known about the stability of the structure of the cages, which form at different combinations of pressure and temperature. The environment where hydrates are found is very harsh, and it is not possible to obtain 100% pure methane hydrates. Seawater already contains many dissolved salts that inhibit the formation of gas hydrates by taking away the required water. The effects of these salts on gas-hydrate formation are not completely understood. Further, the impact of different types of porous media found in the ocean – like silica gel, silica sand and activated carbon – on the formation of hydrates is yet to be studied. Apart from understanding what promotes the formation of hydrates, the study of substances which dissociate these hydrates is quite important. Gas hydrates are notorious for blocking the flow paths of pipelines in the oil and gas industry, which is how they were initially noticed. “Both the promotion and the dissociation of hydrates are to be mastered,” says

Dr. Sangwai. He and his team have been working with a variety of additives to improve the stability of gas hydrates at different temperatures and pressures. Rather than conducting experiments piecewise, which is expensive and time-consuming, they are developing a model to predict hydrate behaviour.

Dr. Sangwai’s aim now is to develop new kinds of additives that will reduce the cost of extracting methane from methane hydrates. Additives are of two types: they can either promote or inhibit gas hydrate formation. Inhibitors help in freeing pipelines, while promoters help in trapping gases in the cages. Additives can also be classified by how they act on hydrates. Those that tweak the temperature and pressure conditions at which hydrates form or dissociate are known as thermodynamic additives.



Other additives do not affect the temperature and pressure conditions but still affect the rate at which hydrates form; these are known as kinetic additives. New hybrid additives are emerging which serve both purposes. Safety is paramount while extracting methane hydrates: if huge amounts of methane gas are released suddenly from the drilling site, an underwater explosion is highly probable. To prevent this, cheap but highly volatile methanol is used as a drilling fluid, where it acts as a thermodynamic inhibitor. Prof. Sangwai and his team are looking for options to replace the volatile methanol with polyethylene glycol. If gas hydrates have so many advantages, why aren’t we seeing natural gas extracted from methane hydrates in the mainstream industry? The main problem is that transporting large amounts of water from the recovery site to the extraction site is a costly affair.

Moreover, the formation of gas hydrates is a very slow process, which forms a bottleneck in the supply chain. The extra transportation costs, coupled with the slow kinetics, have proven to be deterrents to industrial adoption, which is why LNG still dominates the natural gas industry. Dr. Sangwai states cheerfully, “This is where we come into the picture.” Gas hydrates can be made competitive by minimising the transportation between the recovery and extraction sites. When asked how he decided to work in the field of gas hydrates, Dr. Sangwai says, “Right after my PhD, I wanted to work in the area which will be the future of the oil and gas industry. I believe gas hydrates will be our future energy sources.” With commercial production already started in Japan, though at a slow pace and small scale, we can expect production of natural gas from gas hydrates to begin in India soon. ⌅

Meet the Author Nikhil Mulinti is a final year Dual Degree (B.Tech. - M.Tech.) student in the Department of Ocean Engineering at IIT Madras. He is fond of science, and his fascinations range from cosmology to anthropology. He is currently working on bubbly flow technology, a trending research area in marine hydrodynamics. For any comments or criticism, the easiest way to reach him is to drop a mail at nikhil.mulinti@gmail.com.




Suppose one routine day you wake up to find another living being who is an identical copy of your mirror image. That is, this mysterious creature behaves and moves exactly the way you do, but for his body being a laterally inverted version of yours. However, if you both bump into each other, you are killed instantaneously! Bizarre, isn’t it?

While we currently know nothing that even remotely hints at the existence of such alien beings, we are aware of a similar phenomenon in the realm of particles, owing to the Nobel Prize-winning inferences of the renowned physicist Paul Dirac and the many studies founded upon his seminal work. Dirac combined Einstein’s special relativity with quantum theory into an equation that yielded two solutions – one associated with positive energy and another with ‘negative energy’. He conjectured that for every class of charged particles, there exists a class of ‘antiparticles’ – particles with the same mass but opposite charge. For example, the antiparticles corresponding to electrons (a class of matter particles) are particles called positrons. Equivalently, for every class of electrically neutral matter particles, there exists a class of electrically neutral antiparticles having the same mass. Thus, for every entity of matter we are familiar with, there exists a corresponding ‘antimatter’ entity. Several experiments that ensued from Dirac’s postulates proved the existence of such antiparticles. It was also learnt that when a particle collides with its antiparticle, the two annihilate and release energy in the process, regardless of whether they are charged or neutral. Thanks to years of toil by physicists all over the world, today we understand and appreciate antimatter and its relationship with matter much better. But there is one thing we still do not comprehend: if matter and antimatter are exactly equal and opposite, why does the universe contain much more matter than antimatter? Many explanations have been proposed so far, but none of them is fully convincing.
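As a concrete, textbook instance of the annihilation just described (standard numbers, not taken from the article): an electron and a positron at rest annihilate into two photons, each carrying the electron’s rest energy:

\[
e^{-} + e^{+} \;\longrightarrow\; \gamma + \gamma, \qquad E_{\gamma} = m_{e}c^{2} \approx 511\ \text{keV}.
\]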

Courtesy: CERN


Nevertheless, we do know that certain conditions, called the Sakharov conditions, must be satisfied for there to be an imbalance between matter and antimatter. One such fascinating condition is the occurrence of a phenomenon called Charge-Parity Violation (CP Violation) during the first few seconds following the Big Bang. It turns out that the most enduring theory of particle physics, known as ‘The Standard Model’, provides some explanation for CP Violation.

Dr. Jim Libby is an experimental particle physicist working at IITM since 2009. Dr. Libby received his undergraduate and postgraduate degrees from the University of Oxford. His PhD work was with the DELPHI experiment at the Large Electron-Positron Collider at CERN. Since completing his PhD in 1999, he has worked with accelerator experiments at CERN, Stanford, Cornell and KEK (Japan). He also participates in studies related to the India-based Neutrino Observatory (INO).



However, whether the Standard Model describes CP Violation correctly is not known with certainty. This is currently an area of active research, and Dr. Libby has been engrossed in it for the past seven years at IIT Madras. Before we move on to Dr. Libby’s specific interests within this area, let us see what the Standard Model itself has to offer. Developed through the latter half of the 20th century, this exhaustive theory seeks to explain the characteristics of the vast multitude of subatomic particles and their complex interactions with the help of only three kinds of particles – six quarks, six leptons and force carrier particles.

Quarks are constituents of the familiar protons and neutrons that make up most of the matter we see around us. They do not have an independent existence; they exist only with other quarks in composite particles (particles composed of other particles). Scientists are currently aware of three pairs of quarks: the up-down, the charm-strange and the top-bottom pairs. Among these, the up, charm and top quarks carry positive electric charge, whereas the other three carry negative electric charge. On a lighter note, the characteristics of these quarks are as weird as their names. Leptons, by contrast, are solitary matter particles and thus have an independent existence. The six kinds of leptons are electrons, muons, tau particles and three kinds of neutrinos. All leptons except neutrinos carry electric charge. The third category of particles in the Standard Model is that of the force carriers, which give rise to three fundamental forces: the strong force, the weak force and the electromagnetic force. The strong force holds quarks together, the weak force causes heavier quarks and leptons to decay into lighter ones, and the electromagnetic force causes electrically charged particles to repel or attract each other.

These forces in turn give rise to three kinds of interactions: the strong, the weak and the electromagnetic interactions. These interactions include particle decays and annihilations, and can be represented much like chemical reactions, characterised by reactants and products. Their most vital properties are described by quantum mechanics; two such properties are the familiar electric charge (denoted by C) and parity (denoted by P).

Courtesy: DESY


Before asking what ‘parity’ means, it is worth recalling two laws of physics from high school: the law of conservation of linear momentum and the law of conservation of angular momentum. If you rack your brain long enough, and if you have the genius of the great German mathematician Emmy Noether, you might gain one of the most precious insights in physics: every conservation law is associated with a symmetry inherent in nature. For example, linear momentum is conserved because of spatial symmetry – no particular location in space is preferred over any other. In the case of angular momentum, no direction in space is preferred. Likewise, the initial assumption that nature was unbiased and treated matter and antimatter identically – that there was a symmetry between matter and antimatter – reinforced the classical hypothesis that the total parity of a system is always conserved in a particle interaction.



In essence, this means that an interaction and its mirror image can be represented by the same particle equation. However, as is the fate of most scientific theories that long enjoy an unquestioned status as truths even in the most critical of minds, parity conservation was proved incorrect: physicists learnt through experiments that parity is not conserved in weak interactions. Usually, when a promising hypothesis is refuted by experimental evidence, scientists understandably try to modify or extend it to a more general case rather than discard it altogether. That is exactly what happened here – physicists sought another quantity Q such that the combination QP of this quantity and parity would remain symmetric even in weak interactions. The renowned physicist Lev Landau proposed that Q is nothing but C, the charge. Parity conservation thus came to be replaced by CP Symmetry: a process in which all the particles are exchanged with their antiparticles was assumed to be equivalent to its mirror image.

Although CP Symmetry succeeded to an extent in explaining weak decays, history was destined to repeat itself – this extended notion of symmetry, too, was found to be violated, in the decays of particles called neutral kaons. Even the combination of charge and parity was not conserved in these decays, which again happened to be weak interactions. This phenomenon came to be known as CP Violation. To see it more concretely, consider the decay B− → DK−. We now perform a CP operation on this decay, i.e., we laterally invert the decay in three-dimensional space (convert it into its mirror image) and then invert the signs of the charges of the particles involved. We thus arrive at the decay B+ → DK+, which is a charge-conjugated version of the mirror image of the original decay.

Experiments have shown that the rates at which these two decays occur differ by a remarkable amount. In other words, the total amount of CP on the reactants’ side of the combined decays B± → DK± does not equal that on the products’ side. This is how charge-parity conservation (CP symmetry) is violated in this decay. The overall significance of this symmetry violation for particle physics can be gleaned from the fact that its discoverers, Cronin and Fitch, were awarded the Nobel Prize in 1980. At this juncture, the following question arises: how should the Standard Model account for CP Violation? The answer lies in the properties of the Cabibbo-Kobayashi-Maskawa matrix (CKM matrix), a square array of numbers that is central to the Standard Model. To get an idea of this matrix, first note that there are weak decays in which negatively charged quarks (bottom, down and strange) get converted into positively charged ones (top, up and charm). This gives rise to nine possible decays, and the CKM matrix, with its three rows and three columns, contains information on the strength of each of them. CP Violation is incorporated into the Standard Model by allowing this matrix to have complex-number entries. However, there is a crucial constraint: for the CKM matrix to make physical sense, the Standard Model requires it to have a mathematical property called unitarity. This property can be expressed as a set of equations which the matrix must satisfy. “These equations involve eighteen variables!”, says Prashanth, a student of Dr. Libby, as he laughs at the sheer complexity of the whole thing. Fortunately, unravelling a few relationships between the variables reduced their number dramatically – from eighteen to just four: three Euler angles and one phase variable. The unitarity of the CKM matrix then reduces to fewer constraints, one of which states that the Euler angles should be the angles of a triangle, i.e., they must sum to 180 degrees.



The triangle formed by these angles is called the unitarity triangle. As can be seen from the figure below, its angles are denoted by α, β and γ. Although the phase variable is the one responsible for CP Violation, the area of the unitarity triangle indicates the degree of violation. This triangle is unique in terms of its lengths and angles, and therefore has the potential to indicate how accurately the Standard Model describes this symmetry violation.

The Unitarity Triangle (An Illustrative Image)
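For the curious reader, the unitarity relation that generates this triangle is the standard one (a textbook statement, not taken from the article):

\[
V_{ud}\,V_{ub}^{*} + V_{cd}\,V_{cb}^{*} + V_{td}\,V_{tb}^{*} = 0,
\qquad
\gamma \equiv \arg\left(-\frac{V_{ud}V_{ub}^{*}}{V_{cd}V_{cb}^{*}}\right).
\]

Each term is a complex number, and three complex numbers summing to zero close into a triangle in the complex plane. The triangle collapses to zero area exactly when all the matrix entries can be made real – that is, when there is no CP violation.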

Many physicists thus shifted their focus to determining the values of α, β and γ, the Euler angles of the unitarity triangle. They have been able to determine α and β with reasonable accuracy by studying the interference between various decays involving particles called mesons, but γ remains elusive. You may naturally ask, “Why can’t they determine γ solely from their knowledge of the other two angles, since the three angles form a triangle?” Note that such a method of determining γ would rest on the unitarity assumption, the very constraint on the CKM matrix that gave rise to the three Euler angles. In order to test this assumption, γ must be determined from independent measurements. Moreover, “the current world average precision on γ is significantly worse than that of the other angles of the unitarity triangle”, says a paper recently authored by Dr. Libby. So the next question is: how does one go about enhancing this precision? This is where Dr. Libby’s research enters the picture. His aim so far has been to improve our knowledge of γ by studying how certain observable quantities, called CP-violating observables, violate CP symmetry in B± → DK± decays. Dr. Libby’s focus has been on a special class of these observables.

These observables are associated with particle states called CP eigenstates, which are the products of certain kinds of B± → DK± decays. Two kinds of CP eigenstates have been of particular interest: CP-even eigenstates, for which η = +1, and CP-odd eigenstates, for which η = −1. This does not mean, however, that no efforts had previously been made in this direction: four kinds of B± → DK± decays had already been studied, each essentially a decay of a D meson to a unique CP eigenstate. In each decay, only a fraction of the D mesons decays to the desired eigenstate; this fraction is known as the branching ratio (BR) of the decay. With this background in mind, the end results of these four decays are summarised in the table below (branching ratios are approximate):

State     η     Branching Ratio (%)
π+π−     +1     ≈ 0.14
K+K−     +1     ≈ 0.40
KS π0    −1     ≈ 1.2
KS η     −1     ≈ 0.5

As can be seen from this table, only a very minute fraction of the D mesons decays to any given CP eigenstate. Measurements on these states are thus limited, because not many samples are available for study. So the challenge facing Dr. Libby and his team of students was to identify more easily available eigenstates to which these mesons decay. And they did. They used a critical observation made in a separate study: the branching ratio for the decay D0 → π+π−π0 is 1.43%, significantly greater than the fractions listed in the table above. This implied that the state π+π−π0 was certainly more useful than the four states studied earlier. But it was not known whether it was a CP-even or a CP-odd eigenstate. As it turns out, the answer to this question facilitated a more precise determination of γ.
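To give a flavour of what such CP-violating observables look like, the standard GLW-style rate combinations for B± → DK± with a CP eigenstate D decay are shown below. These are textbook expressions, not quoted from the article; r_B and δ_B denote the magnitude ratio and strong-phase difference of the two interfering B decay amplitudes:

\[
R_{CP\pm} = 1 + r_B^{2} \pm 2\,r_B\cos\delta_B\cos\gamma,
\qquad
A_{CP\pm} = \frac{\pm\,2\,r_B\sin\delta_B\sin\gamma}{R_{CP\pm}}.
\]

Measuring these combinations for both CP-even and CP-odd states over-constrains the three unknowns r_B, δ_B and γ – which is why knowing the CP content of an abundant state such as π+π−π0 feeds directly into the precision on γ.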


The data pertaining to the CP observables associated with the decays described above, as well as several other decays, have been collected by a giant particle detector called CLEO. “CLEO is the granddaddy of flavour physics, with a history of achievement dating back over 30 years”, says Dr. Libby. This machine of monumental importance was designed back in the 1970s to collide electrons with their antiparticles, positrons, at energies of a few GeV – enough, collectively, to accelerate billions of electrons through a potential difference of one volt. Of particular interest is the Cornell Electron-positron Storage Ring (CESR), the part of the facility where these collisions take place: it has a circumference of 768 metres and is located about 12 metres below ground level! Since its initial construction, CLEO has been upgraded several times for various purposes. Its final version, CLEO-c, was tailored to the study of particles containing charm quarks, such as D mesons, and has so far collected several million pairs of D mesons.

Dr. Libby and his team thus set out to analyse the data gathered at CESR containing information on the CP content of the previously mentioned D decay. You may have correctly guessed that the dataset concerned was extremely vast – so vast that even after the physicists had analysed all of it using the methods they had planned to employ, they felt compelled to use an altogether different class of methods just to validate their analysis. This class of methods, known as the Monte Carlo methods, involves generating random numbers and performing repeated simulations on the acquired data, using these random numbers as inputs for the simulations.
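To make the idea concrete, here is a minimal sketch of a Monte Carlo validation in Python. Everything in it is a toy assumption – the “true” CP-even fraction, the sample sizes and the simple counting analysis – and bears no relation to the collaboration’s actual code; it only illustrates the method of testing an analysis on many simulated datasets whose answer is known in advance.

import random

TRUE_F_PLUS = 0.97      # assumed "true" CP-even fraction fed to the toys

def simulate_dataset(n_events, f_plus):
    """Generate n_events tags, each CP-even with probability f_plus."""
    return [random.random() < f_plus for _ in range(n_events)]

def estimate_f_plus(dataset):
    """The 'analysis' under test: estimate the CP-even fraction."""
    return sum(dataset) / len(dataset)

# Run 1000 pseudo-experiments and collect the estimates.
estimates = [estimate_f_plus(simulate_dataset(5000, TRUE_F_PLUS))
             for _ in range(1000)]

mean = sum(estimates) / len(estimates)
spread = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
print(f"mean = {mean:.4f}, spread = {spread:.4f}")

# If the mean sits close to the true value and the spread matches the
# expected statistical uncertainty, the analysis machinery is unbiased.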


CESR Quadrupole Magnet. Courtesy: CLASSE, Cornell University




The next part of the analysis was to determine whether the state π+π−π0 was CP-even or CP-odd. Results indicated that the state was in fact in between these two extremes: a quantity called the CP fraction (denoted by F+), determined for D → π+π−π0, revealed that it was almost a pure CP-even eigenstate, with the “purity” being close to 97%. This key observation was examined further in order to understand its various implications. As far as γ was concerned, Dr. Libby and his team showed that further investigation of the decay mode D → π+π−π0 could enhance the precision on γ. What is more, they were also able to propose an exact analytical method for the determination of this Euler angle. It is just a matter of time before this method is implemented. So it is safe to say that in the worldwide efforts to unravel the most intriguing mysteries of matter and antimatter, Dr. Libby and his team, and hence IIT Madras, have taken a step forward.

However, the story is not over yet. “The formalism needs to be adjusted to incorporate F+ to account for the small CP-odd component in the final state”, as per a recent paper of Dr. Libby’s. Another massive particle detector, the Beijing Spectrometer III, could supply the data necessary for this purpose. Analysing this data would then contribute to our knowledge of γ and push us closer to answering the larger questions involving matter and antimatter. Undoubtedly, this means that there is a long way to go, and that innumerable exciting discoveries are in store for us. ⌅

Meet the Author Rohit Parasnis is a final year Dual Degree student pursuing a B.Tech. in Electrical Engineering and an M.Tech. in Biomedical Engineering at IIT Madras. One of his long-term goals is to make science more interesting and more accessible to all. His past endeavours include generating video content to familiarise school students with experimental science, translating scientific and social-scientific Wikipedia articles into regional languages, and performing a managerial role for the National Service Scheme (NSS) at IIT Madras. He can be reached at rohityparasnis@gmail.com.




Could you briefly tell us about the history of NPTEL? NPTEL was started in 2003 with the initial aim of creating course material for college students and making it available to everyone on the internet. The first phase of the project ran from 2003 to 2009, during which we created a large set of archived video and textual courses. The second phase, which is almost nearing its end now, began with a target number of courses that we eventually managed to raise considerably. The Ministry of Education has now sanctioned the third phase, which focuses on Massive Open Online Courses, or MOOCs. The online courses and the subsequent e-certification programme differ from the archived course material on NPTEL in that they follow the pace of a regular classroom instead of the enrolee’s own pace.

Regarding the MOOCs, when did they start and how have they progressed since then? We started with a single course in March, added a few more in September, and the catalogue has grown every session since; the real expansion came the following July. We are now expecting to reach a steady state where a certain set of courses will be offered in the odd semester and another set in the even semester. The long-term aim is to increase the contribution of other institutes towards these courses and to provide a wider range of courses. How does the e-certification programme via online courses work? Mainly, we offer three formats of online courses, distinguished by their contact hours and duration: short courses of a few weeks, intermediate ones, and long courses extending over nearly a semester.

Prof. Andrew Thangaraj, Coordinator, NPTEL

All the courses offered in a phase have the same starting date, and interested students have to enrol beforehand. Every week, parts of the course content and the related assignments are uploaded on the portal. The assignments have a due date, and students must submit them for evaluation; this makes up part of their final scores. There is also a discussion forum for every course, where students can get their doubts clarified by the teaching assistants in charge. At the end of the course, the student has to attend an offline proctored exam at a designated centre. Currently, we have at least one regular examination centre in every state; if there is a substantial number of students from a region which does not have a centre, a new centre is set up for their convenience. Overall, we have examination centres all over India. The final certificates are prepared from the marks scored in the assignments and the final exam. What is the role played by the Teaching Assistants (TAs) at NPTEL? Students – or rather, teaching assistants – play a very vital role in the smooth functioning of our courses. Many MS, PhD and even final year B.Tech. and Dual Degree students are a part of this programme as TAs. Depending on the length and degree of difficulty of the course, the number of TAs ranges from a handful for some courses to a much larger team for others.



The work that goes on behind the scenes for NPTEL.


They learn about the portal and upload material on it. They are a part of the discussion forum, answering the various questions raised about the courses. They also help in the correction of the exam papers. The TAs have further helped students prepare for exams such as GATE, mapping out the answers to GATE questions from past years across several disciplines. Overall, we have had excellent interaction with the TAs. How is NPTEL different from other MOOC platforms like MIT OpenCourseWare and Coursera? The largest fraction of our viewership consists of Indians, along with viewers from the USA, Pakistan, Africa and elsewhere. We prepare our courses to suit the requirements of the Indian educational system and to make our audience more comfortable with the learning process. Also, we conduct offline proctored exams at the end of our online courses. The certificates that students receive in this way have a greater value attached to them, since they serve as proof of their own achievement. Could you briefly tell us about the outreach programmes and workshops carried out by NPTEL? We have held so many workshops in recent weeks that we are starting to lose count! We have started an effort of creating local chapters in colleges.

As a part of this programme, NPTEL has a representative in each of these colleges, who provides the necessary information regarding feedback on the courses, the work that must be carried out, how to improve the courses, and the various courses the students require. We have coordinated workshops all over the country – in Kerala, Andhra Pradesh and Karnataka, to name a few places – and have even gone to tribal areas in states such as Orissa. At these places, we describe what NPTEL is, what a local chapter is, what online courses are and how students might benefit from them. We have organised a great many local chapters so far and get requests for more each day, with requests even coming from countries like Ecuador.

As the coordinating institute for NPTEL, what role has IIT Madras been playing in its functioning and development? IIT Madras has been one of the primary institutes in laying the foundation of NPTEL.



It has provided financial support for the programme right from its launch in 2003, and has helped in the proper distribution and efficient use of this money. Many components of the courses were initiated at IIT Madras: online courses, transcription and even the subtitling of these courses all saw their beginnings at IITM.

What is the budget of NPTEL and how is it distributed across the phases? The budget we were provided for the previous phase was a five-year grant, in crores of rupees, meant for the creation of courses. Since a lot of money was left even after the courses were created, we used the remainder for setting up studios and for hiring more staff members at the institutes involved in NPTEL. The amount in fact outlasted its intended term and was used very efficiently among the institutes. As a part of the next phase, the Project Advisory Board has sanctioned a fresh grant, meant for the coming years and to be used for the creation of more online courses.


The classroom where NPTEL lectures are recorded at IIT Madras


What is your future vision for NPTEL? We are looking towards creating a virtual technical university. We have laid out a clear plan for this purpose and are working towards it. We are also looking to initiate a credit-based curriculum for all universities in India, which will hopefully become a way for students to earn credits wherever they are. In the future, we want to be a body which keeps offering courses online that students can take from anywhere and use for credit or even for employment. ⌅

Aryendra and Aslamah are second year undergraduate students at IIT Madras. They are correspondents for The Fifth Estate.



Cover Image Credits Front Cover

Source: CC-BY-SA Erik Scott

Design: Vishal Upendran

Electrical Engineering

Contents Design: E Amritha & Sree Ram Sai

Computer Science and Engineering TM Krishna performing in a concert Design: E Amritha Source: S Hariharan

Testing the DISANET system at IIT Madras Design: Sree Ram Sai Source: Wikimedia Commons

Engineering Design A breast cancer cell Design: Sree Ram Sai Source: Wikimedia Commons

Aerospace Engineering

Humanities and Social Sciences

A jetpack Design: Vishal Upendran Source: Wikimedia Commons

A primary health care center in Karnataka Design: Raghavi & Swetamber Source: Wikimedia Commons

Applied Mechanics

Management Studies

A set-up for precision glass moulding Design: Rohit Parasnis & Kiranmayi Malapaka Source: IPT Fraunhofer Griechenland

Design: Kiranmayi & Swetamber Source: Shutterstock

Biotechnology

Mathematics

A water color painting of DNA Design: E Amritha Source: Caitie Magraw Art

Design: E Amritha Source: YouTube

Chemical Engineering

Orthopaedics: On-bone setting Design: Vishal Upendran Source: Wikimedia Commons

Mechanical Engineering

A type of algae Design: Swetamber & Kiranmayi Source: Wikimedia Commons

Metallurgical Engineering

Chemistry Structure of dendrimers Design: E Amritha Source: “Dendrimers Market”, NANOTECHMAG

Civil Engineering Kedarnath temple Design: E Amritha Source: Debdutta Purkayastha via Blogger

Center For Innovation A concept image of an AUV Design: Sree Ram Sai

Friction welding Design: E Amritha Source: TWI via Flickr

Ocean Engineering A gas hydrate block embedded in the sediment of Hydrate Ridge, off Oregon, USA Design: Vishal Upendran Source: Wikimedia Commons, CC-BY-SA

Physics Collision of matter and antimatter Design: Vishal Upendran Source: Newscom





Thanks for Reading Readers of Immerse will include faculty members and students at other IITs, IISERs and NITs, to which we will send copies, just as we did last time – apart from the online audience, of course. If you are a student at IIT Madras and would like your research project to be featured, we would be delighted to hear from you; email us at immerse.iitm@gmail.com. If you found this issue exciting and would like to contribute next time, be it as an editor, writer, photographer or graphic designer, please get in touch at the same address and we will let you know how you can contribute. In any case, if you have anything to say, be it praise or criticism, we welcome your views: let us know by filling in the feedback form at goo.gl/BCSdkf, also accessible via this QR code.

t5eiitm.org/immerse

| Immerse



