Tech + Research

Application Deadlines

Applications are currently closed.

The Iribe Initiative for Inclusion and Diversity in Computing (I4C), the Department of Computer Science, and the Institute for Trustworthy AI in Law & Society host Tech + Research, a three-day workshop geared toward engaging undergraduate women and nonbinary students in computing research.

Tech + Research is part of Technica, the world’s largest hackathon for underrepresented genders. Participants work to solve pressing issues through hands-on computing research projects with UMD faculty and present their projects as part of the demo session at Technica.

Frequently Asked Questions

Who will be leading the research projects?

Faculty members and graduate students from areas spanning computer science, information science, computer engineering, and computing education lead the Tech + Research projects.

How do the research project teams work?

After completing their interest forms, participants are divided into research teams to hack on a real-world research problem presented by the researchers. During Technica weekend, teams come together to develop real-world solutions and prepare to share their ideas at the Expo on Sunday.

How will this workshop prepare me for future research projects and/or graduate school?

Along with providing hands-on research experience in a dynamic hackathon setting, the weekend workshop includes sessions on the basics of computer science research and the exciting opportunities that come with pursuing a graduate degree in computer science. 

What is the Tech + Research time commitment and how does that fit into Technica’s events?

This event involves separate programming from Technica. However, you will have full access to the hackathon’s career fair and keynote speakers. In addition to the bootcamp on Friday, you are expected to spend about 12 hours on your research project during the hackathon. 

Can I participate in Tech + Research virtually?

We have limited space for virtual participants for the full Technica weekend, but you can participate in the Friday bootcamp virtually.

Tech + Research 2024 Confirmed Projects

Designing Voice User Interface Skills (Alexa) for Individuals Part of the Accessibility Community (Virtual)

Devpost: EchoAlert

Summary: The world of home conversational AI (intelligent home voice assistants such as Amazon Alexa and Google Home) spans multiple areas. In this project, we hope to design a working VoiceFlow skill/application that serves members of the accessibility community whose needs at home could be supported by voice technology. This could include supporting the home ecosystem through a voice UI, developing a skill that supports a particular population (such as aging adults or cognitively impaired adults), or bringing in students’ own perspectives on what needs to be developed for the voice platforms.

Goals: To design a complete conversational flow that supports someone from the disability community, allowing them more independence and a better quality of life through interaction with their home technology device.

Deliverables: A complete VoiceFlow conversation flow, along with the rationale for its design and how it supports the accessibility community.

Researchers:
Ramita Shrestha | Graduate Student | rshrest5@umd.edu
Galina Reitz | Faculty | IS Department | gmreitz@umd.edu

Target Outcome: To develop a usable voice interface for those within the accessibility community (e.g., older adults, blind users, and people with motor or cognitive difficulties). We will design a voice UX/UI and test it.

Expected Deliverables: Alexa conversation flow using VoiceFlow (a platform that supports the quick and easy design and development of voice skills)
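
For a sense of what a conversation flow encodes, here is a minimal, illustrative Python sketch of a flow represented as a small state machine. The medication-reminder scenario and all prompts are invented for illustration; the project’s actual flows are designed visually in VoiceFlow.

```python
# Illustrative only: a tiny state-machine view of a voice conversation flow,
# similar in spirit to what a VoiceFlow design encodes. The scenario and
# prompts below are hypothetical.
FLOW = {
    "start": {
        "prompt": "Good morning. Would you like to hear today's reminders?",
        "yes": "read_reminders",
        "no": "goodbye",
    },
    "read_reminders": {
        "prompt": "You have one reminder: take your medication at 9 a.m. Should I repeat that?",
        "yes": "read_reminders",
        "no": "goodbye",
    },
    "goodbye": {"prompt": "Okay, have a good day.", "yes": None, "no": None},
}

def run_flow(answers):
    """Walk the flow with a scripted list of 'yes'/'no' answers."""
    state = "start"
    while state is not None:
        node = FLOW[state]
        print("Assistant:", node["prompt"])
        if node["yes"] is None and node["no"] is None:
            break
        reply = answers.pop(0) if answers else "no"
        print("User:", reply)
        state = node.get(reply, "goodbye")

run_flow(["yes", "no"])
```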

Learning How Not to Drive (In Person Only)

Devpost: Predicting Human Behavior for AVs

Research Focus:  Safe Autonomous Driving

Summary: Recent advances in self-driving platforms, from food delivery robots to autonomous vehicles (AVs), highlight the urgent need for strong safety standards. AVs will operate in mixed autonomy, with human-driven and self-driving cars sharing the road for the foreseeable future. AVs therefore need the capability to anticipate human-driven vehicle behaviors, which are likely to be different from and less predictable than a self-driving car’s, making it especially critical to capture large amounts of human driving data under diverse pre-crash scenarios. This work aims to bridge this gap by employing a virtual reality (VR) vehicle simulator to immerse participants in various accident scenarios. This approach enables safe collection of human driving data and behaviors in high-risk situations, significantly enhancing our understanding of driving dynamics and safety. The presentation will include background information, related works, methodology, results, and a conclusion, similar to a full research project.

Goal: To design and create new scenarios using the game engine and the driving simulator to capture more real-world human driving data using the VR Driving Simulator.

Researchers:
Ming Lin | Faculty | CS Department | lin@umd.edu
Sandeep Thalapanane | Research Associate | sandeept@umd.edu
Sandip Sharan Senthil Kumar | Research Associate | sandip26@umd.edu
Sourang Sri Hari | Research Associate | sourang@umd.edu

Target Outcome: For this project, we aim to give an introductory overview of an immersive VR driving simulator that recreates a real-world driving experience. Students will present their experience and how they created new adverse scenarios.

Expected Deliverables: New scenarios set up in the immersive VR driving simulator to capture different driving styles and how they differ under varying adverse conditions, along with an analysis of how the collected data correlates with driving personality.
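
To make “setting up a new scenario” concrete, here is a hypothetical Python sketch of how an adverse scenario might be parameterized before being loaded into the simulator. The fields, values, and logged signals are assumptions for illustration, not the simulator’s actual configuration schema.

```python
# Hypothetical scenario parameters for an adverse pre-crash situation; the
# schema below is illustrative, not the project's real configuration format.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    weather: str              # e.g. "heavy_rain", "fog", "clear"
    time_of_day: str          # e.g. "night", "dusk"
    traffic_density: float    # relative density of surrounding traffic (0-1)
    trigger_event: str        # the pre-crash event the participant must react to
    log_channels: list = field(
        default_factory=lambda: ["steering", "throttle", "brake", "speed"]
    )

night_jaywalker = Scenario(
    name="night_jaywalker",
    weather="heavy_rain",
    time_of_day="night",
    traffic_density=0.8,
    trigger_event="pedestrian_crosses_outside_crosswalk",
)
print(night_jaywalker)
```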

Evaluating the Impact of De-Identification on Social and Behavioral Research Data (In Person)

Devpost: Impact of De-Identification on Behavioral Data

Research Focus: Conducting a technical analysis of de-identification tools to understand how they do or do not meet users’ needs.

Summary: Social and behavioral researchers often collect sensitive data about people. Publishing research data is beneficial—enabling replication and meta-analysis, as well as providing transparency for public funds—and is often required by journals and funders. To prevent harms and privacy violations to research participants, data must be de-identified, which is a complex and challenging task. Principled approaches to de-identification, such as differential privacy and k-anonymity, can help ensure that data meets certain standards of privacy. However, researchers understandably have concerns that this gain in privacy will be unacceptably offset by a loss of data utility or fairness.

As a first step towards addressing researchers’ concerns, we aim to establish a baseline understanding of how existing de-identification tools impact data utility. In this project, we will conduct experiments applying de-identification tools on published research data.

Goal: To understand how existing de-identification tools impact the utility of real research data, and to form hypotheses about how these tools could be better designed to meet the needs of social and behavioral researchers.

Deliverables: We hope to produce both quantitative metrics (examples will be provided) on the utility of de-identified data, as well as qualitative observations about the experience of using de-identification tools.
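
As a small illustration of the kind of quantitative check involved, here is a minimal Python sketch, assuming pandas and an invented toy dataset: it measures k-anonymity over a set of quasi-identifiers and compares one simple utility statistic before and after suppressing unique records.

```python
# Toy example only: the columns and values are invented, not a published dataset.
import pandas as pd

df = pd.DataFrame({
    "age_group": ["18-24", "18-24", "25-34", "25-34", "25-34"],
    "zip3":      ["207",   "207",   "207",   "208",   "208"],
    "education": ["BA",    "BA",    "MA",    "BA",    "BA"],
    "score":     [3.1,     4.0,     2.7,     3.8,     3.3],  # analysis variable
})

quasi_identifiers = ["age_group", "zip3", "education"]

# k-anonymity: size of the smallest group of records sharing the same
# combination of quasi-identifier values.
k = df.groupby(quasi_identifiers).size().min()
print(f"dataset is {k}-anonymous on {quasi_identifiers}")

# One simple utility check: suppress records whose quasi-identifier group is
# unique (a common de-identification step) and compare the analysis variable's
# mean before and after.
group_sizes = df.groupby(quasi_identifiers)["score"].transform("size")
suppressed = df[group_sizes >= 2]
print("mean score before:", df["score"].mean(), "after:", suppressed["score"].mean())
```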

Researchers
Wentao Guo | wguo5@umd.edu | Graduate Student
Emma Shroyer | eshroyer@umd.edu | Graduate Student
Michelle Mazurek | mmazurek@umd.edu | Faculty | CS Department 

Examining Diffusion Models for Image Generation and Classification (In Person Only)

Devpost: Diffusion Models for Image Generation

Research Focus: The goal of this project is to introduce students to image generation tasks in computer vision, namely using diffusion networks to generate images based on text prompts.  

Summary: Deep networks have shown strong capabilities for standard vision tasks such as image classification and object detection. More recently, generative networks have become extremely popular. These models are able to create detailed images that can even fool humans into believing that they are human-generated. Recently, diffusion models have become extremely popular in allowing users to create a highly detailed image using just a text prompt. In this project, students will be introduced to how diffusion networks work and explore their properties such as how different prompts can affect image generation quality. They will also have the opportunity to finetune these networks for their own custom images by introducing new concepts to an existing network. Finally, students can exploit these networks to classify unseen images. 

Goal: Teach students the basics of deep learning architectures, which are still relevant today, and teach them about diffusion models and how to finetune and debug them.

Deliverables: Students will analyze different prompts to explore the quality of existing diffusion models. They will also have finetuned models that can be applied to create custom images that can mix and match different concepts and styles! Lastly, they will be able to implement a simple idea to transform these models to perform classification. All of this will be done in a Google Colab notebook.
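
As a rough preview of the starting point, here is a minimal text-to-image sketch, assuming the Hugging Face diffusers library and a GPU runtime in Colab; the model ID and prompt are placeholders that students would swap for their own experiments.

```python
# Minimal text-to-image sketch using a pretrained diffusion pipeline.
# Assumes a GPU runtime; model ID and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a watercolor painting of a terrapin reading a book"
# Varying the prompt, num_inference_steps, and guidance_scale is one way to
# explore how generation quality changes.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sample.png")
```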

Researchers

Vatsal Agarwal | vatsalag@umd.edu | Graduate Student

Abhinav Shrivastava | Faculty | CS Department

“Hi. I love you. Please send me gift cards”: Analyzing Romance Scams Targeting Older Adults (In Person Only)

Devpost: Scam Data Visualizations

Summary: The world’s population is aging. At the same time, the number of scams targeting older adults (people who are 65 years or older) is increasing, and these scams are becoming more sophisticated. Many older adults seek a romantic relationship and join online dating websites or apps. Cold-hearted scammers have leveraged this situation to go after older adults’ retirement savings, causing monetary losses and emotional distress. In this project, students will seek insights on scams targeting older adults, in particular romance scams.

Researchers:
Dave Levin | Faculty | CS Department | dml@cs.umd.edu
Julio Poveda | Graduate Student | CS Department | jpoveda@umd.edu

Target Outcome: Develop techniques to improve our understanding of and detection of romance scams targeting older adults.  

Expected Deliverables: Analysis of a dataset of romance scams targeting older adults. This can include quantitative or qualitative analyses.
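
As one illustration of a possible quantitative direction, here is a minimal sketch of a bag-of-words scam classifier, assuming scikit-learn; the toy messages and labels are invented for illustration and do not come from the project’s dataset.

```python
# Toy scam-message classifier; the messages and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "I love you, but my wallet was stolen. Please send gift cards tonight.",
    "Wonderful talking with you! Coffee on Saturday?",
    "My dear, wire the visa fee so I can finally fly to meet you.",
    "Here are the photos from the family reunion last weekend.",
]
is_scam = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, is_scam)
print(model.predict(["Send me the gift card codes and I will visit soon"]))
```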

FaunaFlow: Building an Animal Observer via Point Tracking (In-Person Only)

Devpost: FaunaFlow

Research Focus: Point tracking in videos involves following specific points or features across consecutive frames, allowing for the analysis of motion and trajectory. This technique utilizes algorithms to identify distinctive features in each frame and match them with corresponding points in subsequent frames.

Summary: Point tracking in videos involves following specific points or features across consecutive frames, allowing for the analysis of motion and trajectory. This technique utilizes algorithms to identify distinctive features in each frame and match them with corresponding points in subsequent frames. In scientific applications, point tracking is crucial for animal behavior studies, enabling researchers to monitor movement patterns, speed, and interactions of individual animals or groups in their natural habitats. For instance, tracking points on a fish's body can reveal intricate swimming patterns, while following points on monkeys can provide insights into their tree-climbing movements, foraging behaviors, or social dynamics. In this project, students can apply any existing point-tracking models to insects like ants, flies, or bees or to animals like fish, rats, or monkeys. The goal is to develop either a re-identification strategy or some form of behavioral analysis. This hands-on approach allows students to apply tracking techniques to real-world animal studies, potentially uncovering new insights into animal behavior or improving existing tracking methods.

Goal: The goal is to develop either a re-identification strategy or some form of behavioral analysis. This hands-on approach allows students to apply tracking techniques to real-world animal studies, potentially uncovering new insights into animal behavior or improving existing tracking methods.

Deliverables: A demo on a standard dataset.
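
For a sense of the underlying mechanics, here is a minimal sketch of classical point tracking with OpenCV’s Lucas-Kanade optical flow. The video path is a placeholder, and participants would likely swap in a learned point-tracking model rather than this classical baseline.

```python
# Classical point tracking as a stand-in for learned point trackers.
# "animals.mp4" is a placeholder path to a behavior video.
import cv2

cap = cv2.VideoCapture("animals.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Pick distinctive corner points to follow across frames.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7)
trajectories = [[tuple(p.ravel())] for p in points]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    for traj, p, s in zip(trajectories, new_points, status):
        if s[0]:  # point was successfully tracked into this frame
            traj.append(tuple(p.ravel()))
    prev_gray, points = gray, new_points

print(f"tracked {len(trajectories)} points; "
      f"longest trajectory spans {max(len(t) for t in trajectories)} frames")
```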

Researchers
Pulkit Kumar | pulkit@umd.edu | Graduate Student
Namitha Padmanabhan | namithap@umd.edu | Graduate Student 
Abhinav Shrivastava | Faculty | CS Department

Analyzing Nation-State Censorship Around the World (In Person Only)

Devpost: Censorship and Network Suppression

Many countries around the world censor internet traffic to control information, suppress political opposition, and even restrict access to basic information about reproductive health. The goal of this project will be to use public datasets to learn more about how censors operate. Researchers will pose questions like: 

  • Triangulating Internet censorship or Internet shutdown events using data from OONI, Cloudflare Radar, Internet Society, Censored Planet, and IODA
  • Determining the most common censorship circumvention tools used around the world
  • Identifying the censorship topology of countries around the world
  • Identifying networks and countries where there are currently gaps in censorship measurement efforts

Researchers:
Dave Levin  | Faculty | CS Department  | dml@cs.umd.edu
Sadia Nourin | Graduate Student | CS Department 

Target Outcome: Conduct data exploration and analysis of existing censorship measurement data to gain deeper insights into censorship behavior, infrastructure, and topology.

Expected Deliverables: Data analysis, scripts, visualizations, and descriptions of findings. 
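
As a small illustration of what this data exploration can look like, here is a minimal pandas sketch; the file name and the probe_cc / test_name / anomaly columns echo OONI-style fields but are placeholders, not a specific dataset schema.

```python
# Placeholder exploration of exported measurement data; column names are assumed.
import pandas as pd

df = pd.read_csv("measurements.csv")  # e.g. an export of public measurement results

# Share of measurements flagged as anomalous, per country and test type
# (assumes "anomaly" is a 0/1 flag).
anomaly_rate = (
    df.groupby(["probe_cc", "test_name"])["anomaly"]
      .mean()
      .sort_values(ascending=False)
)
print(anomaly_rate.head(10))

# Countries with very few measurements may point to gaps in coverage.
print(df["probe_cc"].value_counts().tail(10))
```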

Question Answering Calibration: AIs Knowing When They're Right (and Wrong) (In Person)

Devpost: QA Calibration

Research Focus: In our Grounded QA task, we assess the reliability of QA models by evaluating their calibration, specifically focusing on how well the model's confidence in its predictions aligns with the accuracy of those predictions. To better understand this concept, we adopt the idea of a "buzz" from Trivia Quiz competitions. In this context, a buzz occurs when a player is confident enough to provide an answer before the question is fully revealed. Similarly, in our evaluation, we measure whether the model's prediction probability reflects its actual prediction accuracy.

Summary: Our research project is centered on evaluating question-answering (QA) systems, with a particular focus on their calibration. Calibration, in this context, refers to how closely a model’s confidence in its predictions matches the actual correctness of those predictions. This is crucial for ensuring that the model’s confidence reflects its reliability in real-world tasks. To measure calibration, we draw on the concept of a "buzz" from Trivia Quiz competitions, where participants buzz in with an answer as soon as they feel confident enough, often before hearing the full question. Similarly, we assess whether a QA model’s confidence aligns with its likelihood of making a correct prediction as the question is incrementally revealed.

A key feature of our approach is that questions are presented in stages, with the model producing a series of guesses and confidence scores at each step. This allows us to track how the model’s confidence evolves as it receives more information. Our evaluation focuses on three main objectives: 1) determining at which point in the question reveals the model becomes confident enough to produce a correct answer, 2) assessing whether the model’s confidence scores accurately reflect the correctness of its guesses, and 3) comparing the alignment between confidence and correctness in models versus human participants.

To quantify these dynamics, we use a novel metric called Average Expected Buzz, which measures the expected confidence level at which the model will likely buzz in with a correct prediction. This provides a comprehensive evaluation of the system's calibration.
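
To make the buzzing idea concrete, here is an illustrative Python sketch of a toy buzz rule over per-step (confidence, correct) pairs. This is not the Average Expected Buzz metric itself, just a simplified picture of how confidence thresholds and correctness interact as a question is revealed.

```python
# Illustrative toy buzz rule; not the project's Average Expected Buzz metric.
def first_buzz(steps, threshold=0.8):
    """Return (step_index, was_correct) for the first step whose confidence
    reaches the threshold, or (None, None) if the model never buzzes."""
    for i, (confidence, correct) in enumerate(steps):
        if confidence >= threshold:
            return i, correct
    return None, None

# Hypothetical trace: confidence grows as more of the question is revealed.
trace = [(0.2, False), (0.45, False), (0.7, True), (0.9, True)]
print(first_buzz(trace))  # (3, True): buzzes on the final reveal, correctly

# A crude calibration check across several questions: among buzzes at this
# threshold, how often was the buzzed answer actually correct?
traces = [trace, [(0.3, False), (0.85, False)], [(0.95, True)]]
results = [first_buzz(t) for t in traces]
correct = [c for _, c in results if c is not None]
print(sum(correct) / len(correct))
```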

After submission, we plan to test these models on adversarial questions crafted by human experts, specifically designed to be challenging at the final stage (or "run") of the question in a Trivia human-computer tournament. This will allow us to evaluate whether the submitted QA systems can consistently outperform human experts using our calibration metric.

The overarching goal of this project is to enhance the reliability of QA systems by improving the alignment between their confidence estimates and actual performance, making them more trustworthy for real-world applications that depend on accurate, well-calibrated decision-making under uncertainty.

Goal: The broader goal of this project is to improve the reliability and trustworthiness of QA models by ensuring that their confidence estimates are better aligned with their actual performance, ultimately enhancing their applicability in real-world tasks where decision-making based on uncertainty is crucial.

Deliverables: Submission to HuggingFace leaderboard

Researchers
Yoo Yeon Sung | yysung53@umd.edu | Graduate Student
Yu Hou | houyu@umd.edu | Graduate Student
Jordan Boyd-Graber | ying@umd.edu | Faculty | CS Department
