One student, two continents
CMU-Africa student Farida Eleshin (MSIT ’24) is concluding her master’s program with a final semester in Pittsburgh, where she’s working on several research projects in the CHIMPS Lab that focus on privacy and security.
Born in Ghana to a family that valued education, Farida Eleshin (MSIT ’24) naturally strives to be a professional who makes information more accessible to users.
In fact, her interest in improving privacy and security policies comes from personal experience. During an exercise in one of her graduate-level courses, “Privacy Policy, Law, and Tech,” Farida learned something that surprised her.
“I was intrigued by the amount of data about myself that was out there that I didn't even know about,” she said. “Getting to learn about my digital footprint and being more considerate of the information I put online has really motivated me as a researcher.”
Getting to learn about my digital footprint and being more considerate of the information I put online have really motivated me as a researcher.
Farida Eleshin, student, MSIT ’24
The information technology curriculum has given Farida the space to explore the field of privacy and security from both an expert and a consumer perspective. Farida began her master’s education at Carnegie Mellon University Africa in 2022, and she’s spending her final semester before graduation in Pittsburgh as part of the College of Engineering’s global exchange program. She chose the program for its course offerings and for the opportunity to work on-site at the CHIMPS Lab (Computer Human Interaction: Mobility Privacy Security). The lab is led by CyLab’s Jason Hong, professor in the Human-Computer Interaction Institute.
“Coming to Pittsburgh, I wanted to do more research in trustworthy AI and security and privacy. It has been really amazing to meet the team in person and work closely with everyone,” Farida said.
Farida began working remotely as a research assistant in the CHIMPS Lab while she was based at the college’s Kigali, Rwanda, location during the fall semester. She continued her two projects in person once she arrived in the United States in January for the spring term.
One of these projects focuses on seed phrases, which have emerged as an alternative to passwords for securing crypto wallets. Unlike typical passwords that people choose for themselves, these phrases consist of twelve randomly generated words that users must enter to access their electronic funds. The rationale is that user-chosen passwords are easier to guess and therefore more vulnerable to hacking. Still, there are things to untangle before seed phrases become a go-to security measure. Most notably, if a seed phrase is forgotten, there are no backup security questions or recovery codes to restore access. The funds are essentially lost.
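The core idea behind a seed phrase can be sketched in a few lines of Python. This is an illustration only: real wallets follow the BIP-39 standard, which draws from a fixed list of 2,048 words and appends a checksum derived from the underlying entropy; the short `WORDLIST` below is hypothetical and far too small for actual security.

```python
import secrets

# Hypothetical, deliberately tiny wordlist for illustration.
# BIP-39 wallets use a standardized list of 2,048 words, so each
# word contributes 11 bits of entropy (12 words ≈ 128 bits + checksum).
WORDLIST = [
    "apple", "brave", "cloud", "dance", "eagle", "flame",
    "grape", "house", "ivory", "jelly", "koala", "lemon",
    "mango", "noble", "ocean", "piano",
]

def generate_seed_phrase(n_words: int = 12) -> list[str]:
    """Pick n_words uniformly at random with a cryptographic RNG.

    secrets.choice uses the OS entropy source, unlike random.choice,
    which is seeded and predictable.
    """
    return [secrets.choice(WORDLIST) for _ in range(n_words)]

if __name__ == "__main__":
    print(" ".join(generate_seed_phrase()))
```

The sketch also hints at the usability problem Farida’s team is studying: the phrase is the only credential, so if the user loses it, nothing in the system can regenerate it.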
Farida and her research team are trying to understand how to make seed phrases as practical as possible. In the project’s current phase, the team has sent out surveys asking users how they store or back up their seed phrases. With this feedback, the researchers hope to develop secure options for managing such sensitive information.
“We are currently studying people’s security behaviors and their perceptions of using seed phrases,” said Farida. “How do they go about protecting their seed phrases? How often have people lost money? And what are some things we could recommend to the cryptocurrency industry to make seed phrases better so that people can keep track of them?”
As an avid reader, Farida recognizes that promoting consumer literacy on themes like AI bias and data privacy works in tandem with developing technical solutions. Her second research project leverages TAIGA (Tool for Auditing Images Generated by AI) as an educational resource. TAIGA places AI in the user’s hands and walks them through a step-by-step process for examining how synthetic data often reinforces harmful biases. First, users enter a search query, which produces a gallery of AI-generated images. Users are then invited to highlight images and write about any biased representations, which can be posted as a thread to a WeAudit forum where others have shared the biases they’ve identified.
Each of these research projects speaks to Farida’s determination to be a subject matter expert concerned with both sides of the equation: the production of technical tools, and the implementation of effective policies and educational materials for users. Each concern prompts its own questions, Farida explained. On the technical side, the questions are about efficiency: “Is my machine learning model fast?” “Does it produce accurate results?” On the policy side, ethics is the matter at hand: “How are people affected by the actions of this model?” “How diverse is my dataset?”
While commencement in May is fast approaching, Farida isn’t done with school just yet. In fact, she hopes to enroll in a Ph.D. program that will allow her to continue studying AI bias or privacy and security.