Version 4.0 - 2024
Inside Look at Ethical AI
Exerting Cybersecurity Force
Models and Risk Evasion
Emmanuel Klu (CS ’13)
Letter from the Dean
College of Computing Magazine
Dean, College of Computing: Lance Fortnow
Vice President for Enrollment Management and Student Affairs: Mallik Sundharam
Associate Vice President for Marketing and Communications: Chelsea Kalberloh Jackson
Director of Content: Andrew Wyder
Editor: Casey Moffitt
Dear All,

Artificial intelligence took center stage at the 2024 Nobel Prize announcements, with half of the Nobel Prize in Chemistry going to the creators of the protein-folding prediction algorithm AlphaFold, perhaps the biggest triumph of modern AI. The Nobel Prize in Physics was awarded to Geoffrey Hinton and John Hopfield for the development of the neural networks that are behind modern AI algorithms, including generative AI and AlphaFold. There is no better recognition of the role that computing and AI are playing in the world.

Hinton is the second person to have won both a Nobel Prize and the Association for Computing Machinery’s A. M. Turing Award, the highest honor in computing. I’d like to take this opportunity to remember the first, Herbert Simon, a political science professor and chair at Illinois Tech from 1942 to 1949. During that time, Simon developed the concept of bounded rationality—the notion that people don’t always make perfectly rational choices because of limited information and cognitive ability. After his tenure here, he moved to Pittsburgh and helped found the computer science department at the Carnegie Institute of Technology—now the School of Computer Science at Carnegie Mellon University—a model for our own college. He received the 1978 Nobel Prize in Economics for his work on bounded rationality and the 1975 Turing Award for applying these ideas to early AI systems.

This will be my final dean’s letter in this publication. It has been my pleasure to serve all of you as the inaugural leader of what is now the biggest college on campus, and I thank you for your support and role as ambassadors of computing at Illinois Tech.
Lance Fortnow
Dean, College of Computing
lfortnow@iit.edu
Design and Illustration: Joe Goforth
Photography: Jamie Ceaser, Michael Reiter, Bonnie Robinson
Copyeditor: Casey Halas
College of Computing Magazine is published annually by the Office of Marketing and Communications and the College of Computing.
Send letters to: College of Computing Magazine, Office of Marketing and Communications, 10 West 35th Street, 13th Floor, Chicago, IL 60616
Mission Statement Provide the students and faculty of Illinois Tech from all backgrounds and disciplines the best-in-class computational and data science platform to excel in their respective fields.
ADA Statement Illinois Institute of Technology provides qualified individuals with disabilities reasonable accommodations to participate in university activities, programs, and services. Such individuals with disabilities requiring an accommodation should call the activity, program, or service director. For further information about Illinois Tech’s resources, contact the Illinois Tech Center for Disability Resources at disabilities@iit.edu.
On the Cover The drawing of Emmanuel Klu (CS ’13) is placed on an artificial intelligence-generated background image that came from prompting AI to visualize what ethical AI would look like as an abstract watercolor painting. Illustration: Joe Goforth
Version 4.0
Contents
Faculty News
College of Computing faculty continue to drive innovation through cutting-edge research and through interactions with government and industry partners, in addition to receiving recognition for their efforts.
Trailblazing an Ethical AI Landscape
Emmanuel Klu (CS ’13) outlines the elements of ethical and responsible AI in his role at tech giant Google.
Mastering Market Meltdowns
Ismail Iyigunler (Ph.D. AMAT ’12) is implementing models to reduce financial risks at Bank of America.
Student News
College of Computing students have created innovative tech-driven solutions, conducted real-world-driven research, and competed in national competitions over the last year.
High Stakes Cybersecurity
Harshini Chellasamy (ITM/M.A.S. CYF ’20) is delivering the solutions for the lofty cybersecurity standards that executives at Fortune 500 companies demand.
FACULTY NEWS
Image generated by Adobe Firefly
Tackling Machine Learning Vulnerabilities

Binghui Wang, assistant professor of computer science, has dedicated much of his research to building trustworthy machine learning models, and he says that earning the prestigious CAREER Award from the National Science Foundation validates his past work and the potential for future research.

Wang says the objective of his CAREER award-backed project is to develop new methods that will make machine learning models, especially deep learning models that use deep neural networks, more secure against privacy and security attacks. It will also help these models become more trustworthy.

The work will be structured around three thrusts. Thrust one will be to design novel information-theoretic representation learning methods against common privacy attacks, including membership inference, property inference, and data reconstruction attacks. Thrust two will be to design novel information-theoretic representation learning methods against common security attacks, including test-time evasion attacks, training-time poisoning attacks, and training- and test-time backdoor attacks. Thrust three will generalize thrust one and thrust two to handle diverse attack types, data types, and learning types.

Existing defense methods often aren’t effective in real-world applications with strict confidentiality requirements, or they degrade the performance of the models. Many defenses are aimed at specific attack types rather than multiple concurrent attacks. Wang’s goal is to address these limitations by designing a trustworthy learning framework based on information theory.

“Earning the NSF CAREER Award signifies that my past work has had, and proposed research will have, the potential to advance the field of trustworthy machine learning,” he says. “It can be a catalyst for further career development and offers increased visibility in this field.”
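The privacy attacks named in Wang’s first thrust can be made concrete with a small sketch. Below, a confidence-thresholding membership inference attack is run against a hypothetical classifier: the attacker guesses that a record was part of the training set whenever the model is unusually confident about it. This is an illustrative toy on synthetic data, not Wang’s defense framework.

```python
# Minimal sketch of a confidence-based membership inference attack.
# Illustrative only -- not Wang's framework. Uses a generic
# scikit-learn classifier on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def attack(model, X, threshold=0.95):
    """Guess 'member of the training set' when top-class confidence is high."""
    confidence = model.predict_proba(X).max(axis=1)
    return confidence >= threshold

# Training points tend to be flagged far more often than held-out points,
# which is exactly the leakage a membership inference attacker exploits.
print("flagged as members (train):   ", attack(model, X_train).mean())
print("flagged as members (held out):", attack(model, X_out).mean())
```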
Addressing Cybersecurity, AI Issues in the South Pacific
Maurice Dawson, associate professor of information technology and management at Illinois Institute of Technology, was recently an invited guest of the United States Agency for International Development (USAID) in Fiji to share his cybersecurity and artificial intelligence expertise. Dawson spoke with government officials, students, and the media to address the unique cybersecurity needs that Fiji faces, in addition to USAID’s interests in the South Pacific.

“Many sectors of Fiji are concerned with cybersecurity,” Dawson says. “The country has yet to adopt a high-level cybersecurity directive that will flow down to all levels of government, which includes giving businesses direction on how they should secure their networks.”

Dawson provided his insight to help strengthen the country’s digital economy and to highlight new threats that need to be considered, including the emergence of AI and quantum computing. He also spoke with journalism students at the University of the South Pacific about the risks and uses of AI, discussing how AI tools can help journalists in their work, how AI can be used to create misinformation and fake news, and how to detect it.

Dawson says the trip has resulted in a new research opportunity, which began this fall with a graduate student. “This project will investigate how cybersecurity policies in the Pacific, particularly Fiji, translate into practical implementation,” Dawson says. “We will engage with USAID to address real mission needs, mapping policy directives to low-level strategies for effective cyber defense.”
Building Bonds Through Fulbright Journey

Gurram Gopal, chair of the Department of Information Technology and Management at Illinois Institute of Technology, recently spent two weeks visiting campuses of Universidad Privada Boliviana (UPB) across the Bolivian countryside, making academic and business connections between the Bolivian university and Illinois Tech as a Fulbright Specialist. It was Gopal’s third award from the Fulbright program, which is sponsored by the United States Department of State. He traveled to campuses in the Bolivian cities of Cochabamba,
La Paz, and Santa Cruz de la Sierra, where he gave lectures on the digital trends and digital transformation sweeping across the global economy and how Bolivian enterprises can take advantage of this massive change and investment. “There are great opportunities in business in Bolivia,” Gopal says. “Europe has been well-studied and established. China has been a big focus over the last 20 to 30 years. Over the next few years there are going to be a lot of opportunities in the developing economies in Central and
South America.” Gopal says the trip spurred talks to develop degree programs for students at UPB and Illinois Tech in business, architecture, engineering, entrepreneurship, and tech. “For Illinois Tech students, it will expand their knowledge on how to conduct business there and to discover the diverse range of investment opportunities present there,” Gopal says.
Forecasting Phase Changes

A grant from the National Science Foundation will fund an Illinois Institute of Technology researcher to embark on a project that aims to create a unified mathematical framework to predict reactions of interacting fluids.

Shuwang Li, professor of applied mathematics, was awarded $242,677 from the NSF to build a model that will forecast the phase changes of interacting fluids that are out of equilibrium. Predicting these reactions will have beneficial applications in biological, physical, chemical, and engineering systems.

“Recent advances in modeling and computational methods by my group and others now make the modeling and computing feasible,” Li says. “I plan
to develop and apply state-of-the-art adaptive numerical methods to large-scale computations, and perform analytical, numerical, and modeling studies of important constituent processes.” Li says results from this project will also be incorporated into cross-disciplinary courses at Illinois Tech for students who are working toward mathematics degrees, as well as for those working toward chemical and material engineering degrees. The research component of the grant also can be used to develop an undergraduate course on complex fluids as part of Illinois Tech’s Interprofessional Projects (IPRO) Program.
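As a rough illustration of what forecasting phase changes involves computationally, the sketch below steps a textbook Allen-Cahn phase-field equation on a periodic one-dimensional grid: an initially mixed state separates into distinct phases over time. The model and parameters are generic classroom choices, not the adaptive, large-scale methods Li’s project will develop.

```python
# Toy phase-field (Allen-Cahn) simulation on a periodic 1-D grid.
# A generic textbook example of out-of-equilibrium phase dynamics,
# not the adaptive numerical methods in Li's project.
import numpy as np

N = 256
L = 2 * np.pi
dx = L / N
dt, eps, steps = 1e-3, 0.1, 5000

phi = 0.1 * np.random.default_rng(1).standard_normal(N)  # small random initial mixture

for _ in range(steps):
    # Periodic finite-difference Laplacian.
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    # dphi/dt = eps^2 * Laplacian(phi) - (phi^3 - phi): the field relaxes toward +/- 1.
    phi += dt * (eps**2 * lap - (phi**3 - phi))

# Fractions of the domain occupied by each phase after the mixture separates.
print("phase fractions:", (phi > 0).mean(), (phi < 0).mean())
```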
“It’s nice to be recognized by my colleagues, but this also gives visibility for the whole department and the university. It’s great recognition for the mathematics research that’s going on here at Illinois Tech.” – Chun Liu Department of Applied Mathematics chair, on his election as a fellow of the American Mathematical Society for his outstanding contributions to the creation, exposition, advancement, communication, and utilization of mathematics in the fields of partial differential equations and calculus of variations and their applications in complex fluids.
Professors Shine at International Gathering

Professors of applied mathematics Tomasz Bielecki and Igor Cialenco were among four academics invited to present a series of lectures based on their research at the renowned Banach Center in the Institute of Mathematics of the Polish Academy of Sciences, as a part of the Simons Semester’s “Stochastic Modeling and Control.”

“Being at the Banach Center as a student, I never thought that I would have the chance to lecture there,” Bielecki says.

The pair spent more than a month in Poland, each giving a series of lectures on their research, as well as attending a conference and workshops. Exchanges there with fellow academics, industry leaders, and students push the boundaries of mathematics research and can lead to new research collaborations.

“This is how science gets done,” Cialenco says. “Doing research is not just writing papers. The purest level of science is to interact with other researchers.”
Top: Tomasz Bielecki Bottom: Igor Cialenco
Developing New Social Behavior Models

Mathematical models have been used to simulate and predict emergent behaviors in nature such as birds flocking, bees swarming, and fish milling in a school, but an Illinois Institute of Technology researcher is developing a new model to predict social emergent behaviors.

Ming Zhong, assistant professor of applied mathematics, has earned a Ralph E. Powe Junior Faculty Enhancement Award from Oak Ridge Associated Universities to develop two distinct data-driven modeling methods that are designed for simulating and predicting emergent behaviors. The models will be designed to apply to social issues such as a market economy, opinion formation, and criminal justice. He is one of 37 recipients chosen from 174 applicants.

The objective is to offer mathematical insights into how modeling such behaviors can enhance the understanding of daily American life, while improving living standards and the efficiency of civil governance.

“Clustering, flocking, and swarming are emerging patterns that are generated by locally interacting agents,” Zhong says. “These agents can be intelligent agents, such as humans. Social behaviors can demonstrate certain interesting patterns—for example, traffic flow, market economics, group think, or herd behavior. Consensus of political opinions can be thought of as a form of clustering.”

Zhong says the research will result in new data-driven modeling techniques that use a variational inverse problem to derive more realistic models from actual data obtained from the Chicago public database.
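Zhong’s example of opinion consensus as a form of clustering can be illustrated with a standard bounded-confidence model, in which agents only average their opinions with neighbors whose views are sufficiently close. The sketch below is a generic textbook simulation, not the data-driven, variational inverse-problem methods his project proposes.

```python
# Toy bounded-confidence opinion model (Hegselmann-Krause style):
# agents average only with neighbors whose opinions fall within a
# confidence radius, and opinion clusters emerge from local rules.
# Generic illustration only -- not Zhong's modeling method.
import numpy as np

rng = np.random.default_rng(42)
opinions = rng.uniform(0.0, 1.0, size=100)   # 100 agents with opinions in [0, 1]
radius = 0.15                                # confidence (interaction) radius

for _ in range(50):
    new = np.empty_like(opinions)
    for i, o in enumerate(opinions):
        neighbors = opinions[np.abs(opinions - o) <= radius]  # local interaction only
        new[i] = neighbors.mean()
    opinions = new

# Count the distinct opinion clusters that emerged.
clusters = np.unique(np.round(opinions, 3))
print(f"{len(clusters)} opinion clusters:", clusters)
```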
Ming Zhong Photo: Bonnie Robinson
STUDENT NEWS
Unlocking AI’s Power in the Workplace

Layla Shalabi (AI, M.A.S. ’24) has seen the application of artificial intelligence take leaps and bounds during her time at Illinois Institute of Technology, so much so that she has taken her AI skills into the world of accounting and financial consulting.

“When I started at Illinois Tech, the technology was a black box—something farfetched—to the general population,” she says. “But now it’s ingrained in everything that we do.”

She will utilize AI technologies to improve the performance and productivity of clients and internal personnel as a cloud and digital engineering consultant with PwC.

“In my field, we’re seeing a great demand in all industries for AI experts to bridge the gap between the technology and the common user,” she says. “With my knowledge and experience, I can design purpose-specific, AI-powered tools for clients to automate a lot of foundational processes that would otherwise require a team of people to manage.”

Shalabi says she realizes how people like her are important to PwC’s success.

“We’ve seen certain AI models be ‘convinced’ of incorrect information,” Shalabi says. “I prefer to use it for low-stakes, menial tasks, to increase time efficiency without risking plagiarism or compromising data.”
Layla Shalabi
CyberHawks Flex at CyberForce Competition

Illinois Institute of Technology students showcased their cybersecurity skills during the United States Department of Energy’s CyberForce competition. Three teams of students from the university’s CyberHawks student organization competed against 107 teams of students from across the country in St. Charles, Illinois.

CyberForce challenges students to apply their skills and abilities in five critical cybersecurity categories to a real-world environment.

“Through these events, we test our knowledge on rapidly establishing network defenses, responding to cyberattacks, and solving cybersecurity challenges,” says David Arnold (Ph.D. CPE 5th Year), CyberHawks president and captain of team IIT CyberHawks.
A total of 17 CyberHawks members composed the three teams: IIT CyberHawks, IIT Talons, and IIT Tailwind. They were tested on cybersecurity anomalies, maintaining availability of business functions, defenses, maintaining an employee website, and documenting their findings and organizational risks. “CyberForce competition is one of the best and toughest ones I have attended,” says Shreyas Kulkarni (M.A.S. CYBS 1st Year), IIT Tailwind captain. “It helped me understand the gaps within me, where I need to focus, and start working.” Elizabeth Aquino (CS, M.A.S. CYBS 5th Year) says the competition gave her an opportunity to apply the theory that she has learned about in the classroom and provided opportunities to learn more
about cybersecurity careers. “Competitions allow students to meet other students from across the nation, as well as connect with professionals,” she says. “We are able to ask professionals about their career, the competition, or even themselves.”
[From left to right] David Arnold, Nicholas Quigley, Benjamin de Pater, Mohamed Trigui, and Joanna Findura of team IIT CyberHawks at the CyberForce competition.
Crime Prevention System Nets Grainger Prize

Student researchers from VigilAI earned the top spot at the fourth annual Grainger Innovation Prize for their crime prevention system.
The VigilAI team took home the $15,000 top prize in the fourth annual Grainger Computing Innovation Prize at Illinois Institute of Technology on November 7, 2024. VigilAI created a real-time crime prevention system that integrates artificial intelligence and computer vision, enhancing surveillance responsiveness by identifying critical events from video feeds. Team members included Utkarsh Nanda, Utsav Pathak, and Brittany Shepherd. Second place was awarded to RiskWatch, which developed a wearable fall detection device for seniors. The device utilizes accelerometer, gyroscope, and heart rate data to alert caregivers instantly, and it analyzes fall patterns to help prevent future incidents. Team members Sam Karson, Nathan Cook, Deimantas Gilys, and Daniil Skvyrskyi received a $10,000 prize.
SmartTraffic AI earned the $5,000 third prize in the competition. SmartTraffic AI is an adaptive AI-based traffic management system that dynamically adjusts speed limits based on traffic, weather, and accident data; it aims to reduce congestion and accidents. Team members included Kaung Myat Naing, Nicholas Allison, and Myat Minn Thiha. The aim of the Grainger Prize competition is to build interdisciplinary teams of Illinois Tech students to exhibit their computing skills through big data, AI, and data science projects that have the potential to positively impact society. Teams were encouraged to tackle projects that explored “computing with data for social good.” The Grainger Computing Innovation Prize is supported by an endowed gift funded by The Grainger Foundation.
Students Qualify for Hackathon Championship

Three Illinois Institute of Technology students formed a research team to develop an investment optimization tool, which qualified them for the international SAS Hackathon 2023 grand champion competition. Narges Hosseinzadeh (M.S. AMAT ’23), Kan Zhang (Ph.D. AMAT ’23), and Thi Truong (AMAT ’23) were joined by Olivia Martin, a data science student at Northwestern University, and Vasilios Farmak, a software and data engineer from Greece, to form StaSAStician. The team was assembled under
the guidance of Sou-Cheng Terrya Choi, research associate professor of applied mathematics at Illinois Tech. Using the open-source SAS Viya environment, the team developed a dashboard that allows insurance advisers to optimize an investment portfolio based on risk and environmental, social, and corporate governance (ESG) standards. The dashboard visualizes these variables for clients, allowing the clients to see how they affect the potential returns on their investments. The work resulted in the
team earning the top spot in the Insurance Division of the 2023 SAS Hackathon, qualifying the team for the overall top prize. The biggest challenge that the team faced was time. It was given one month to become familiar with the SAS Viya platform and ESG investing, learn how to develop a product, and then produce its pitch videos. “When it comes to strict deadlines such as hackathons, creative things and a unique way of thinking arises,” Farmak says. “Simple ideas don’t need too much time.”
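The trade-off the dashboard visualizes, expected return against risk and ESG standards, can be sketched in a few lines. The numbers below are made up for illustration and the search is a simple random scan over portfolios; this is not the team’s SAS Viya implementation.

```python
# Minimal portfolio trade-off sketch: score random long-only portfolios
# on return, risk, and a minimum ESG constraint. Made-up numbers,
# illustrative only -- not the StaSAStician team's SAS Viya dashboard.
import numpy as np

rng = np.random.default_rng(7)
returns = np.array([0.06, 0.08, 0.12, 0.05])   # expected annual returns
risks = np.array([0.10, 0.15, 0.25, 0.07])     # annualized volatilities
esg = np.array([72, 55, 40, 85])               # hypothetical ESG scores per asset
min_esg, risk_aversion = 60, 2.0

best, best_score = None, -np.inf
for _ in range(20000):
    w = rng.dirichlet(np.ones(4))              # random long-only weights summing to 1
    if w @ esg < min_esg:                      # enforce a minimum portfolio ESG score
        continue
    # Return minus a risk penalty (assets treated as uncorrelated for simplicity).
    score = w @ returns - risk_aversion * np.sqrt(w**2 @ risks**2)
    if score > best_score:
        best, best_score = w, score

print("weights:", np.round(best, 3), "portfolio ESG:", round(best @ esg, 1))
```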
“I see a very similar story with the arts, chess, music— they all come together in a very interesting way where you always have a lot of similar and analogous ideas of chaos versus order. Being very sporadic, very aggressive, perhaps having that chaotic nature versus a very ordered, very defensive position.” —Stanley Nicholson (MATH ’23) on conducting research
Breaking Biases

Machine learning models have demonstrated powerful predictive capabilities, but they have also demonstrated bias against certain demographic groups. Canyu Chen (Ph.D. CS 4th Year) has joined a research group that is working to limit that bias and build public trust in machine learning models.

“Fairness and privacy are two important aspects in trustworthy artificial intelligence,” Chen says. “I noticed that it is a critical but underexplored problem to study fair AI algorithms considering real-world privacy constraints.”

Machine learning models are increasingly used in the health care field to diagnose disease, develop treatment plans, and model the spread of viruses. Addressing bias against specific ethnic groups, genders, or ages will help mitigate existing health care disparities among minority groups.

“More specifically, conventional bias mitigation algorithms are not applicable to real-world scenarios where privacy mechanisms such as local differential privacy are enforced,” Chen says. “We aim to design novel techniques to largely improve the fairness of machine learning models considering the privacy constraints in the real world.”

The team studied a new solution for fair classification in a semi-private setting, where only a small number of clean sensitive attributes are available. It developed a novel framework, FairSP, that can achieve fair prediction under this semi-private setting. FairSP learns to correct noise-protected sensitive attributes by exploiting the limited clean sensitive attributes. Then, it jointly models the corrected and clean data in an adversarial way for debiasing and prediction. Initial analysis shows that the proposed model can ensure fairness under mild assumptions in the semi-private setting.

Canyu Chen
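The privacy constraint Chen describes can be illustrated with randomized response, a standard local differential privacy mechanism in which each person’s reported group label is flipped with some probability. The sketch below shows how that noise distorts a naive fairness audit, which is the gap FairSP is designed to address; it is an illustrative toy, not the FairSP framework itself.

```python
# Sketch: local differential privacy via randomized response on a binary
# sensitive attribute, and how it distorts a naive fairness audit.
# Illustrative only -- this is not the FairSP framework.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
group = rng.integers(0, 2, n)                        # true sensitive attribute (0/1)
# Biased predictions: group 1 receives positive outcomes less often.
pred = (rng.random(n) < np.where(group == 1, 0.4, 0.6)).astype(int)

def parity_gap(pred, attr):
    """Demographic parity gap: difference in positive rates between the two groups."""
    return abs(pred[attr == 0].mean() - pred[attr == 1].mean())

# Randomized response (epsilon-LDP): keep the true label with probability e^eps / (1 + e^eps).
eps = 1.0
keep = np.exp(eps) / (1 + np.exp(eps))
noisy_group = np.where(rng.random(n) < keep, group, 1 - group)

print("parity gap on true attributes: ", round(parity_gap(pred, group), 3))
print("parity gap on noisy attributes:", round(parity_gap(pred, noisy_group), 3))  # attenuated by the noise
```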
Students Improve Programming Skills at National Competition

[From left to right] Sofia Yang, Avery Stubbings, and Mohamad Fares representing Illinois Tech at the International Collegiate Programming Contest North America Championship and Programming Camp.

A team of students from Illinois Institute of Technology expanded their skills and gained a greater understanding of the computing profession by traveling to the campus of the University of Central Florida in Orlando, Florida, to compete in the International Collegiate Programming Contest North America Championship and Programming Camp. It was the first time Illinois Tech was represented at the North American championship. Mohamad Fares (CS 3rd Year), Avery Stubbings (ECE 4th Year), and Sofia Yang (ECE 4th Year) displayed exceptional programming skills in the Mid-America Region at Purdue University.

“ICPC was a continuous learning experience, where I learned about the aspirations some of the contestants have, the work some of the coaches and professors do, and about what it is like to work at some of the most selective companies, such as Jane Street or Citadel,” Fares says.

During the competition, teams
of three students work on a single computer to solve a series of up to 11 programming problems for five hours using various languages, including Java, C, C++, Kotlin, and Python. The competition not only challenges the students’ programming skills, but also their problem-solving skills. “The coding camps at ICPC left the biggest impression on me,” Stubbings says. “They showed me that there is still so much more that I can learn to improve as a problem solver and coder.” The team was coached by Associate Professor of Computer Science Gruia Calinescu and Assistant Teaching Professor of Computer Science Farshad Ghanei.
FEATURE
TRAILBLAZING
in an Ethical AI Landscape The trials of building responsible, fair, and robust models
Artificial intelligence’s fine line between potential and peril puts the powerful technology’s developers on slippery footing. It presents an unparalleled opportunity to solve some grand societal challenges, but missteps and a lack of care can perpetuate harm to significant segments of society.

Emmanuel Klu (CS ’13) works on the front line of building AI systems as a responsible and society-centered AI research engineer at Google Research. He says that this work includes asking many questions, anticipating pitfalls, and developing systems that people can trust and rely on.

“I spend a lot of my time working in two buckets: building AI systems responsibly and leveraging AI for social impact,” Klu says. “Our research is driven by solving problems that matter in society. For example, we work on language inclusion in AI models and leveraging AI to solve food insecurity.”

The first step of building AI models is to deeply understand the problem that you intend to solve. It is critical to be familiar with the various factors that influence the behaviors or outcomes that you’re modeling and to engage stakeholders with the requisite expertise for the problem domain. This helps to frame the AI task appropriately, identify potential pitfalls, and minimize unintended outcomes.

“Once the problem is correctly identified, the next important step is all about data,” Klu says. “The concept of data-centric AI comes to mind here: that AI will only be as good as its data. This means that to build models responsibly, we need to make sure the data being used supports that goal.”

Determining what represents good data really depends on the task at hand, but the principles underpinning responsible AI are consistent. A few of these principles include privacy, fairness, and robustness.

“We need to protect privacy by making sure data doesn’t include anything that can identify an individual,” Klu says.
“We also need to understand the bias present in the data and how that impacts fairness of outcomes. For robustness, we assess whether the data is representative enough to support all possible scenarios in a practical setting.”

One project that Klu worked on aimed to reduce unintended bias in language models, specifically in toxicity detection. The goal was to ensure that toxicity results were not disproportionately impacting marginalized communities. The key to this project was the development of a rich taxonomy for identity and the building of a repository of context associated with identity terms. By collaborating with affected communities, this collected knowledge was leveraged to evaluate language models and to understand performance across various identity groups. Quantifying the extent of the problem and intervening to address it required carefully crafted data.

Leveraging AI for decision making is a tricky topic, especially as AI gets deployed in critical domains such as health care. One major challenge is understanding the long-term impacts of interventions—technological or otherwise—in society. Klu says that he spends a fair amount of his time on systems dynamics, a discipline commonly used in business and engineering operations, as an approach to better understanding and addressing complex societal problems. It allows him to explore possible feedback loops that technology creates: for example, how technology changes human behavior, the unexpected ways in which humans may use technology, and how that use will lead to the evolution of these models.

“We need to better understand how human-computer interactions might evolve and how best to build today to mitigate harms that may take time to show up,” he says.

This expands his research at Google beyond the technical aspects of building AI models and into the social sciences. “We call AI development a ‘sociotechnical’ process,” Klu says. “This naturally means it must be multidisciplinary.”
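To make the toxicity evaluation described above concrete, one common audit compares a model’s false positive rate on non-toxic sentences that mention different identity terms. The sketch below uses a deliberately naive keyword “model” and made-up template sentences to show the shape of such an evaluation; it is not Google’s taxonomy, data, or tooling.

```python
# Sketch of a per-identity-group audit of a toxicity classifier.
# The "model" is a deliberately naive keyword scorer and the sentences are
# made up; this shows the shape of the audit, not Google's tooling.
identity_terms = {"group_a": ["muslim", "jewish"], "group_b": ["gay", "lesbian"]}

def naive_toxicity_model(text):
    """Stand-in classifier that (wrongly) treats some identity terms as toxic signals."""
    flagged = {"hate", "stupid", "gay"}
    return any(word in flagged for word in text.lower().split())

non_toxic_templates = ["I am a proud {} person.", "My neighbor is {} and very kind."]

for group, terms in identity_terms.items():
    sentences = [t.format(term) for t in non_toxic_templates for term in terms]
    # Every sentence is non-toxic, so any positive is a false positive.
    false_positives = sum(naive_toxicity_model(s) for s in sentences)
    print(f"{group}: false positive rate = {false_positives / len(sentences):.2f}")
```

Run on these benign sentences, the toy model flags one group far more often than the other, which is exactly the disproportionate impact the project set out to measure and reduce.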
Illustration: Joe Goforth
“For example, beyond our technical approaches to evaluating a model, we also pull in a lot of people to try to use it, break it, or highlight issues,” he adds. This “red team” may include ethicists, social scientists, user experience specialists, and psychologists—professionals who don’t need to have technical experience—to help AI researchers develop models that are fair, robust, safe, and trustworthy. Klu says his introduction to AI research started at Illinois Tech—but he opted to temporarily leave the research field to join Google as a software engineer immediately after graduating in 2013.
He started at Google by building a platform for designing and managing big data pipelines for enterprises, and he later worked as a site reliability engineer in Google Cloud, where his focus shifted to systems modeling and reliability while he helped the platform expand its geographic footprint. But he wanted to use his skills to work on social issues and moved into Google’s AI division. “I found a research team working on solving systemic and societal problems,” he says. “Although I didn’t have much direct experience building AI, I knew systems, data, and scale. I was able to bring my systems thinking
into that context.” He continues, “I like to believe I found my way back to AI at the right time, as it started to experience its boom in society. My role allows me to channel my hopes for a more just, fair, equitable, and sustainable society into my work. I think appreciating and understanding the complexity of society is only the first step towards that. “I am excited to see how we can get to better understand AI and leverage it for impact.” •
FEATURE
High-Stakes Cybersecurity
How a cybersecurity consultant designs tailored solutions for top clients—while prioritizing privacy, ethics, and societal impact
Photo: Jamie Ceaser
Armed with extensive cybersecurity knowledge and experience, Harshini Chellasamy (ITM/M.A.S. CYF ’20) is well prepared to tackle the challenges of the profession—but she says that her current role at Boston Consulting Group has been different.

“At BCG, the stakes are high, and we operate at high standards of excellence,” she says.

As a senior cybersecurity consultant, Chellasamy finds herself working side by side with the top executives of Fortune 500 companies to build cybersecurity solutions. The work is challenging, specifically understanding the particular needs that clients have coming from a variety of industries including health care, finance, education, and the public sector.

“The projects are motivating and enriching,” she notes. “Every engagement requires a deep dive into the business’s context. Understanding our clients’ challenges, objectives, and culture allows us to deliver solutions that are unique and highly impactful.”

The issues that Chellasamy encounters fluctuate depending on what the client needs. Some require assistance in structuring their cybersecurity teams, others in designing robust cybersecurity architectures, and still others in scaling secure systems. Her role often involves collaborating with other BCG consultants—each bringing specialized expertise across various domains.

The solutions, Chellasamy says, must stand up to a rigorous review to satisfy the clients’ needs.

“Our solutions go through rigorous review to ensure they meet client needs,” she says. “Every recommendation we provide is substantiated. Clients expect data-backed answers that prove our solutions will deliver results.”

Understanding how a particular industry works, discovering a client’s needs, gaining the perspective of a specific corporate culture, building solutions, and backing up these solutions with facts require a certain amount of research.

Learning how to gather information and present it successfully to others were skills that Chellasamy says she learned while attending Illinois Institute of Technology, where she got her first taste of research with her professors. She was able to publish three papers with faculty researchers before she graduated.

However, the research demands of consulting differ from academia.

“In academic research, I explored various topics of interest based on what I wanted to know more about,” she explains. “In consulting, our work is outcome-driven—clients come with specific questions, and we provide concrete answers.”

“Helping others is at the core of what we do. Our work can positively impact millions. It’s incredibly fulfilling to know that I’m contributing to a safer, more secure digital world.” —Harshini Chellasamy

Chellasamy also had the opportunity to present research that she conducted with her former adviser Maurice Dawson, associate professor of information technology and management at Illinois Tech, and Annamaria Szakonyi, assistant professor of information systems and cybersecurity at St. Louis University, at the Midwest Association for Information Systems conference at Bradley University.

The trio’s paper, titled “Russia’s Strategies for Leveraging AI Policies and Investments for Global Economic Competitiveness,” explores the role of artificial intelligence investment in economic standing, in addition to evaluating Russia’s AI strategy and highlighting areas for improvement. The study analyzes Russian AI policies and investments and integrates the findings with global trends to gauge Russia’s position in the global landscape.

The research reveals that while Russia has intensified efforts to harness AI technologies, substantial gaps persist in comparison to leading nations. Its findings highlight the importance of education in nurturing AI talent, balanced public-private investments in fostering innovation, and global collaboration in growing technological advancement.

Chellasamy conducted the research during a two-month break between jobs, and she says she found the experience to be rewarding.

“It’s always exciting to work with people who challenge you,” she says.

The conference offered Chellasamy a unique opportunity to engage with leaders in tech. Meeting Jaimie Engstrom, CIO of Caterpillar, Inc., she says, was particularly inspiring, as Engstrom’s presentation aligned closely with Chellasamy’s own interests in how technology can drive value in the corporate sphere.

“I was really able to relate to what she was saying from the corporate side,” Chellasamy says.

Chellasamy says she would like to further explore the intersection of AI and cybersecurity as she moves along in her professional career, and she also has interest in cloud security and data privacy.

“Privacy is becoming increasingly important as new laws emerge in the United States and globally, and organizations work to meet their criteria,” she says.

This passion for privacy has roots in her undergraduate studies at Illinois Tech, where she was drawn to the idea of a career that protects others.

“Helping others is at the core of what we do,” she says. “Our work can positively impact millions. It’s incredibly fulfilling to know that I’m contributing to a safer, more secure digital world.” •
FEATURE
Mastering Market Meltdowns

Mastering the skills to navigate crises by advancing the risk models that guide banking decisions
Ismail Iyigunler (Ph.D. AMAT ’12) first walked onto Illinois Institute of Technology’s campus as a first-year Ph.D. student in August 2008, excited to begin his studies and prepare for a career in the field of quantitative finance. A month later, the industry collapsed.

Lehman Brothers, the nation’s fourth-largest investment bank at the time, filed for bankruptcy after 158 years of business as the company’s investments in subprime mortgages tanked. It triggered a decline in global markets, sending the entire industry into a tailspin.

“We didn’t know too much about what was going to happen to financial markets. I had no idea what was going on in the field,” Iyigunler says. “But it was helpful. I got to see everything first-hand as a student, rather than being in the field.
It helped me get familiar with some important topics.”

He found himself motivated to understand the elements that caused the collapse, why it happened, what new methods were needed to recover, and how to prevent another crisis.

“When COVID-19 hit, we saw a similar situation,” Iyigunler says. “We saw a market meltdown in days and weeks, but we had a better idea on how to react to it.”

As director of global markets risk analytics at Bank of America, Iyigunler develops and maintains mathematical models that outline the best-case and worst-case scenarios—and everything in between. These models allow bank officials to make valuable risk management decisions.

“My main focus is to ensure that the models are adequate, stable, and fit for risk management,” he says. “The modeling is a complex statistical analysis. It’s the math-related aspect of risk management.”

Whether a model is new or existing, it is put through numerous stress tests and demonstrations to determine how fit and robust it is. The model is then fine-tuned based on its results and new market information.

These models are subject to strict regulatory oversight and must comply with a variety of risk management and investment guidelines. Some require augmentation to meet new regulatory standards, while others need adjustments to accommodate evolving market conditions. Iyigunler says this means that he sometimes works with computer engineers, attorneys, traders, and risk managers to ensure that the models can be applied to shifting measures.

Most importantly, Iyigunler is assigned to communicate how the model works so that those relying
on it understand how the results are determined.

“Stakeholders need to understand, and be comfortable with, the computations and limitations of the models,” he says. “You must be able to defend your model. You must prove that the model is fit for purpose.”

The models are typically used to assess the risks that a client can bring. They answer questions such as, “What will happen should a client default on a loan?” and “What are the reverberations that it will bring across the bank?” This allows stakeholders to assess how much risk they are willing to take under various market conditions.

“The models have forecast abilities,” Iyigunler says. “The models identify the black swan event that could put extreme pressure on a firm’s risk profile, helping them assess whether they are comfortable taking on that level of risk.”

In essence, these models assess two different areas: market risk and counterparty risk. Market risk includes how markets rise or crash, and if a market collapses, what loss a firm can see. Counterparty risk examines a client’s potential default and how that default will impact a firm’s losses.

“Certain defaults may occur during specific events,” Iyigunler explains. “We can then assess how these defaults correlate with the underlying asset and evaluate the potential impact on a firm’s losses.”

Iyigunler began his career as a quantitative analyst at Intercontinental Exchange, a clearinghouse. There, he gained valuable experience in understanding risk exposure, market dynamics, and the intricacies of clearing complex financial instruments. It provided him with great exposure to a diverse set of transactions and how an assortment
of markets operates.

“My main focus is to ensure that the models are adequate, stable, and fit for risk management. The modeling is a complex statistical analysis. It’s the math-related aspect of investment banking.” —Ismail Iyigunler

“While many of my friends started their careers at banks after graduation, I chose a different path,” Iyigunler says. “They had the chance to develop deep expertise in specific areas, but working at the clearinghouse allowed me to gain a broader view of the industry, which felt like the right fit for me.”

He began his risk management career mostly looking at the credit default swap market. He helped build a centralized risk management operation, examining systemic risk for the whole market. He moved to Bank of America two years ago after climbing the risk management ranks at Intercontinental Exchange for nine years—a career that he has built on the unique pathway that he chose early in his education journey.

Iyigunler began studying mathematics after high school in his native country of Turkey. He calls it a “radical” decision, as many in his home country take on studies for a very specific career, rather than choosing a general area of study.

“I’ve long been fascinated by the intersection of mathematics and finance,” he states. “When I was applying to graduate programs, quantitative finance was an emerging, high-growth field. Its momentum, coupled with seeing many colleagues pursue similar roles, made it a compelling career choice.” •
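The counterparty calculations Iyigunler describes can be sketched as a toy Monte Carlo exercise: simulate market scenarios, let the client’s default probability rise when markets fall (the correlation he mentions), and examine the tail of the resulting loss distribution. All parameters below are made up for illustration; this is not Bank of America’s modeling.

```python
# Toy Monte Carlo sketch of counterparty credit loss under market stress.
# Made-up parameters for illustration only -- not Bank of America's models.
import numpy as np

rng = np.random.default_rng(11)
n_scenarios = 100_000
exposure = 10.0                      # amount owed by the client ($ millions)
recovery = 0.4                       # fraction recovered if the client defaults

# Simulate a market factor; large negative values represent a sell-off.
market = rng.standard_normal(n_scenarios)

# "Wrong-way" risk: the client is more likely to default when markets fall.
default_prob = 0.02 + 0.08 * (market < -1.5)
defaults = rng.random(n_scenarios) < default_prob

losses = np.where(defaults, exposure * (1 - recovery), 0.0)
print("expected loss ($M):       ", round(losses.mean(), 3))
print("99th percentile loss ($M):", round(np.percentile(losses, 99), 2))
```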
Michael Paul Galvin Tower 10 West 35th Street, 14th Floor Chicago, IL 60616
Non-Profit Org. U.S. Postage PAID Illinois Institute of Technology
www.iit.edu/computing
Accelerate the Tech Revolution We’re at a pivotal moment. Our ability to collect and analyze massive amounts of data is changing our world. Now, more than ever, we need skilled and knowledgeable leaders who understand the fundamental importance of data science, machine learning, and cybersecurity—and everything in between. That’s the kind of impact leader we cultivate at the College of Computing. Our students and faculty are already hard at work on innovations that will change the digital landscape of tomorrow. That means preempting problems before they arise, prepping our communities for any challenges that may lie ahead, and creating the tools and technologies that will empower our future.
You can accelerate that work. Power the Difference. https://www.iit.edu/computing/about/giving