Tech + Research Workshop
The Department of Computer Science at the University of Maryland and the Center for Women in Computing are pleased to present the fifth year of Tech + Research: Welcoming Women to Computing Research, a research workshop geared toward engaging undergraduate women in computing, held in collaboration with Technica. During this workshop, student teams will work together to use technology to address pressing issues.
Technica and Tech + Research will be a hybrid experience in 2022!
Parallel to Technica, the largest hackathon for underrepresented genders in the nation, students will participate in the Research track at Technica. The weekend event will bring together computing faculty from institutions across the state of Maryland to serve as mentors on projects in their research areas. Along with providing hands-on research experience in a dynamic hackathon setting, the weekend workshop will include virtual sessions introducing attendees to the basics of computer science research (CSR) and highlighting the exciting opportunities that come with pursuing a graduate degree in computer science.
Note: Please be aware that this event involves programming separate from Technica, and the majority of the programming will take place with the Maryland Center for Women in Computing. However, you will have full access to Technica, including the Career Fair and Keynote Speakers.
IMPORTANT: YOU MUST REGISTER FOR TECHNICA'S RESEARCH TRACK AND COMPLETE THE ADDITIONAL TECH + RESEARCH APPLICATION
This workshop aims to give undergraduate CS students who identify as an underrepresented gender in computing an opportunity to learn about future computer science research opportunities and to gain hands-on experience engaging in CS research in a hackathon setting. Additionally, we plan for this event to allow students to meet computing faculty and current graduate students and to socialize and collaborate with like-minded peers. By providing a positive intellectual, social, and emotional environment for participants to meaningfully engage in computing research, we hope to directly address the gender gaps that currently exist in CS departments in higher education.
Attendees of this event will not only expand their CS skills but also have the opportunity to meet and network with many members of the CS community at the University of Maryland.
Workshop participants will:
- Meet others who share their curiosity and interest in computer science.
- Explore the research experience in computing related domains.
- Work hands-on with researchers.
- Work in a team to tackle a research problem.
- Present their research with their team.
- Broaden their understanding of the possibilities of graduate school and the application process.
Surrounding area schools and departments were invited to submit research projects. Projects from the following departments have been submitted in previous Tech + Research workshops:
University of Maryland, College Park
- Department of Computer Science
- Department of Electrical and Computer Engineering
- College of Information Studies
- College of Education
Deep networks can be very smart. For example, from a single picture of your pet, they can distinguish among dozens and dozens of different species. On the other hand, deep networks can be made to seem foolish: by changing your pet’s picture just slightly, the deep network might suddenly think that it is now a traffic light. These slight changes are known as adversarial attacks. What happens when you ask these networks what the ‘doggiest dog’ looks like? It turns out that a normal network will just return noise, but when we train the network to still work under adversarial attacks, we end up getting brand-new images that might surprise you!
Studying adversarial examples (how to generate them, and how to train models so that they are defended against them) is known as the field of adversarial robustness. A simple mechanism to make models more robust is adversarial training, where adversarial examples are generated while the model is training and included in the training set, so that the model learns to still classify them correctly. A property of robust models is that they have perceptually aligned gradients: when we generate an image based on the gradients of the network so that the new image is classified very confidently as a desired class, the new image will look like something that matches our intuition and perception of that class. In this project, we will create adversarial attacks, train a model to be robust against attacks, and then use the same attack to generate new, surreal images.
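To make the idea concrete, here is a minimal numpy sketch of one classic attack, the fast gradient sign method (FGSM), run against a toy logistic "classifier". The model, weights, and input below are all invented for illustration; the workshop project would attack a real deep network.

```python
import numpy as np

# Toy stand-in for a deep network: a 16-feature logistic "classifier".
# Weights and the input "image" are random, purely for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # model weights
x = rng.normal(size=16)   # an "image", flattened to a vector

def predict(x):
    """Model's confidence that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-w @ x))

def fgsm_attack(x, label, epsilon=0.25):
    """Fast Gradient Sign Method: nudge every feature by +/- epsilon
    in the direction that increases the classification loss."""
    p = predict(x)
    grad_x = (p - label) * w   # gradient of cross-entropy loss w.r.t. x
    return x + epsilon * np.sign(grad_x)

label = 1.0 if predict(x) > 0.5 else 0.0   # take the model at its word
x_adv = fgsm_attack(x, label)
print(predict(x), predict(x_adv))   # confidence before vs. after the attack
```

Adversarial training folds examples like `x_adv` back into the training set, which is what eventually gives robust models the perceptually aligned gradients described above.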
Assistant Professor, Computer Science, UMIACS
Have you ever wondered how courses get scheduled? Each course needs to be assigned to a suitably large classroom at a time its instructor finds acceptable. Unfortunately, not all classrooms are big enough for all courses, and not all instructors' (and students') most-preferred times will be available. How can we meet our minimum goal (schedule all classes) and maximize our constituents' preferences, even as we have thousands of students, dozens or hundreds of courses, and dozens of rooms? This is a real problem for UMD CS!
This research project will explore different approaches to solving this problem. Student researchers will develop solutions to the problem and scientifically compare them both analytically and empirically. The mentors will teach the participants about cutting-edge automated reasoning technology based on SMT -- "satisfiability modulo theories" -- solvers, which can be used as the basis for a solution, and which is seeing increasing use in industry, particularly at Amazon. With this technology you can specify constraints on your solution, and the solver will automatically find one. But it takes some skill to encode the problem in a form amenable to SMT -- the mentors will help the students develop this skill. As a stretch goal, software that students develop could be used in UMD's actual scheduling process!
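As a tiny illustration of the constraints involved, the sketch below brute-forces a toy instance (the courses, rooms, and capacities are all made up). An SMT solver such as Z3 would let you state the same capacity and no-conflict constraints declaratively and scale far beyond what exhaustive search can handle.

```python
from itertools import product

# Toy scheduling instance (all data invented for illustration):
# assign each course a (room, time) slot subject to two constraints.
courses = {"CS101": 120, "CS202": 45, "CS303": 45}   # enrollments
rooms   = {"A": 150, "B": 50}                         # capacities
times   = ["9am", "11am"]

def valid(assignment):
    # Capacity constraint: the room must be big enough for the course.
    for course, (room, _) in assignment.items():
        if rooms[room] < courses[course]:
            return False
    # Conflict constraint: no two courses share a room at the same time.
    slots = list(assignment.values())
    return len(slots) == len(set(slots))

def schedule():
    """Exhaustive search: a stand-in for what an SMT solver does smartly."""
    names = list(courses)
    for combo in product(product(rooms, times), repeat=len(names)):
        assignment = dict(zip(names, combo))
        if valid(assignment):
            return assignment
    return None

print(schedule())
```

With an SMT solver the `valid` checks become declared constraints and the search loop disappears entirely, which is what makes the approach practical at the scale of thousands of students and dozens of rooms.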
Professor, Computer Science, UMIACS
Autonomous driving and deep learning are two trending topics today. Can we use deep learning to power autonomous driving and let a vehicle drive itself using images from a single front-facing camera? This project involves exploring how to set up a deep learning framework, how to collect training data in Unity, how to train a deep neural network, and eventually how to use the trained network to steer a vehicle on both training and testing routes. In short, the project will build a deep-learning-based system that learns from a large set of simulation data from a driving simulator and attempts to model safer navigation for autonomous vehicles, or driverless cars.
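As a loose sketch of the "learn to steer from recorded frames" idea (often called behavioral cloning), the snippet below fits a linear least-squares model on synthetic 8x8 "camera frames". The data, sizes, and the linear model are all stand-ins for the Unity recordings and deep network the project would actually use.

```python
import numpy as np

# Synthetic driving log (invented for illustration): 200 flattened 8x8
# "camera frames" paired with the steering angles a driver produced.
rng = np.random.default_rng(1)
frames = rng.normal(size=(200, 64))
true_w = rng.normal(size=64)                               # hidden "policy"
steering = frames @ true_w + 0.01 * rng.normal(size=200)   # noisy recordings

# "Training": a linear least-squares fit stands in for a deep network.
w, *_ = np.linalg.lstsq(frames, steering, rcond=None)

# "Driving": predict the steering angle for an unseen frame.
new_frame = rng.normal(size=64)
predicted_angle = new_frame @ w
```

The real pipeline replaces the least-squares fit with a convolutional network trained by gradient descent, but the shape of the problem (frames in, steering angles out) is the same.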
Professor, CRA DREU Co-Director, Computer Science, UMIACS
Internet censorship is a problem that affects billions of people around the world. Nation-states engage in automated, in-network censorship of their citizens, and citizens frequently are not told what is blocked or why. In this project, students will analyze very large open-source datasets from tools that monitor the occurrence of network censorship. Students will determine where around the world network censorship is occurring, what content is censored, and when. The goal of this project is to have a mechanism to detect new censorship events as they happen. Students will also be encouraged to ask their own research questions about the data, for example: Are censorship events correlated with political events happening at the same time? How long do censorship events usually last, and how does that vary across regions of the world? Are there patterns to what content is censored?
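A toy version of this analysis might look like the sketch below. The records, field names, and "new event" rule are invented for illustration and stand in for real measurements from censorship-monitoring datasets.

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical measurement records (all fields and values invented).
measurements = [
    {"country": "X", "domain": "news.example", "day": date(2022, 3, 1), "blocked": True},
    {"country": "X", "domain": "news.example", "day": date(2022, 3, 2), "blocked": True},
    {"country": "Y", "domain": "chat.example", "day": date(2022, 3, 1), "blocked": False},
    {"country": "Y", "domain": "news.example", "day": date(2022, 3, 2), "blocked": True},
]

# Where and what: count blocked measurements per (country, domain).
blocked = Counter((m["country"], m["domain"]) for m in measurements if m["blocked"])

# When: flag a "new" event when a pair is blocked on a day without
# having been blocked the day before.
seen = {(m["country"], m["domain"], m["day"]) for m in measurements if m["blocked"]}
new_events = sorted(
    (c, d, day) for (c, d, day) in seen
    if (c, d, day - timedelta(days=1)) not in seen
)
print(blocked.most_common())
print(new_events)
```

Real datasets have millions of rows, so the same grouping and differencing would run over a database or dataframe library rather than Python lists, but the questions (where, what, when, and what changed) are the same.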
Assistant Professor, Computer Science, UMIACS
Every day, more than 50 million older Americans need to make important financial decisions. Aging can affect decision-making abilities, which in turn can affect financial freedom and security. Families that stay connected have a greater chance to thrive and need an effective way to connect with older adults over finances. Several factors contribute to poor decision making: the complexity of financial decisions for older people, access to advice, coercion, and fraud. With this tool, we attempt to enhance the quality of life for older adults and their families by connecting them and their finances.
Machine translation (MT) tools based on AI technology make it possible for virtually anyone to translate text into many of the world’s languages, and thus hold the promise of enabling seamless communication across language barriers. However, they still make many errors, and these errors are hard to catch for users who are not fluent in the languages involved. As a result, MT is sometimes used inappropriately, even in high-stakes settings such as hospitals or courtrooms, where errors can have severe consequences. In this project, we will investigate methods to help users assess the quality of MT outputs, so they can decide when it is appropriate to rely on MT. We will develop metrics to automatically estimate how good a translation is, and use them to assist users who are not able to judge the AI translation themselves. We will evaluate whether these metrics can help them decide when to rely on MT appropriately.
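One very simple (and admittedly crude) quality-estimation heuristic is a round-trip check: if back-translating the MT output yields text close to the original source, the translation is more likely adequate. The sketch below scores string similarity with Python's difflib; real quality-estimation metrics are learned models, and both the example sentences and the 0.7 threshold here are invented.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """String similarity in [0, 1] (difflib's ratio)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def estimate_quality(source, back_translation, threshold=0.7):
    """Score a translation via round-trip similarity and recommend
    whether to rely on it. The threshold is arbitrary."""
    score = similarity(source, back_translation)
    return score, score >= threshold

# Imagined example: a patient instruction and a back-translation of its
# machine translation.
score, rely = estimate_quality(
    "take one tablet twice a day",
    "take one tablet two times per day",
)
```

A heuristic like this illustrates the interface the project cares about: a score plus a rely/don't-rely recommendation that a user who cannot read the target language can act on.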
Associate Professor, Computer Science, UMIACS
Modern analytical database systems – especially those running in the cloud – separate storage and compute. In other words, the datasets being analyzed sit on servers that are primarily designed to store the data. A different set of "compute" servers perform the actual analysis of data. For example, in Amazon's cloud, data can sit in S3, and EC2 instances are spun up on the fly to analyze the data sitting in S3. Therefore, at query time, data must be transferred from the S3 servers to the EC2 servers in order to perform the analysis. Since modern networks are fast, this transfer is not usually a bottleneck, and the convenience of spinning compute servers up and down on the fly (as needed) makes it worthwhile to separate storage and compute in this way. However, in some cases, the storage layer has some basic (or even advanced) query processing capabilities, and it is worthwhile to push down some query processing to the storage layer. This project involves performing research into such hybrid architectures.
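The pushdown idea can be illustrated with a toy two-layer setup; the rows, the query, and the "rows shipped" cost model below are all invented for illustration.

```python
# Rows "sitting on the storage servers" (all data invented).
STORAGE = [
    {"user": "a", "country": "US", "spend": 120},
    {"user": "b", "country": "DE", "spend": 80},
    {"user": "c", "country": "US", "spend": 40},
]

def storage_scan(predicate=None):
    """Storage layer: optionally applies a filter before shipping rows."""
    rows = STORAGE if predicate is None else [r for r in STORAGE if predicate(r)]
    return rows, len(rows)   # rows shipped ~ network transfer cost

def total_spend(country, pushdown):
    if pushdown:   # filter at the storage layer ("pushed down")
        rows, shipped = storage_scan(lambda r: r["country"] == country)
    else:          # ship everything, filter at the compute layer
        rows, shipped = storage_scan()
        rows = [r for r in rows if r["country"] == country]
    return sum(r["spend"] for r in rows), shipped

print(total_spend("US", pushdown=True), total_spend("US", pushdown=False))
```

Both plans return the same answer, but the pushdown plan ships fewer rows across the network; the research question is when (and how much) filtering and aggregation the storage layer should take on.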
Darnell-Kanal Professor of Computer Science
A key takeaway of the COVID-19 pandemic was the need for timely, relevant, and actionable information to support effective public messaging that can impact in-real-life (IRL) outcomes. The COVID-19 pandemic also revealed the need for messaging and policy making at a local scale, since national- or state-level approaches might not appropriately address needs at the community scale. Frontline public health officials often had little insight into the individuals they wished to serve, e.g., their willingness to wear a mask. The PandEval project will address these challenges by creating data collections and tools to assist public health officials.
The goal of this project is to create a curated collection of pandemic-related messages on social media (Twitter). The collection will include messages issued by public health officials (PHOs) or public health experts (PHEs). The collection will also include direct responses such as likes, retweets, and replies, as well as indirect responses, e.g., tweets on the same topic in the same time frame. The project team will analyze engagement statistics and visualize the level of engagement, at the level of an individual tweet or at the level of a PHO or PHE.
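A minimal sketch of the engagement analysis might look like this; the tweet records and the equal-weight definition of engagement are invented for illustration.

```python
# Hypothetical tweet records (authors and counts invented).
tweets = [
    {"author": "pho_alice", "likes": 40, "retweets": 10, "replies": 5},
    {"author": "pho_alice", "likes": 10, "retweets": 2,  "replies": 1},
    {"author": "phe_bob",   "likes": 25, "retweets": 8,  "replies": 12},
]

def engagement(tweet):
    # One simple definition: every kind of direct response counts equally.
    return tweet["likes"] + tweet["retweets"] + tweet["replies"]

# Roll per-tweet engagement up to the account (PHO/PHE) level.
per_author = {}
for t in tweets:
    per_author.setdefault(t["author"], []).append(engagement(t))

summary = {a: (sum(v), sum(v) / len(v)) for a, v in per_author.items()}
print(summary)   # total and mean engagement per account
```

The project's actual analysis would weight response types differently and add indirect responses, but the two levels of aggregation (per tweet, per PHO/PHE) mirror the plan described above.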
To combat declining trust in news in the United States, numerous tools have been created to increase transparency by providing contextual information around news content, but they have largely been developed without regard for usability. Research indicates that transparency, engagement, and racial and ideological diversity in the newsroom are important factors in influencing trust in news. In particular, 71% of Americans deem news publishers' commitment to transparency "very important" when making trust determinations. Tools designed to provide transparency around different aspects of the journalistic process have emerged recently, with some focused on sharing supporting documents, fact-checking claims in articles, and identifying potential misinformation on social media.
According to Michael Karlsson, a prominent scholar in the field of media and communication, there are three types of transparency that can be applied to news content. The first of these is disclosure transparency, which addresses how and why news is being made. The second is participatory transparency, which invites non-journalists to engage in various parts of the news production process (e.g., commenting or sending in images of events). The third is ambient transparency, which includes the display of information near news content to support news consumers in evaluating and forming new meanings around that content. For example, adding hyperlinks, journalists’ personal opinions, or labels indicating whether a news story is considered opinion or news are all ways to provide ambient transparency.
In this project, we will examine several tools to identify the type(s) of transparency (disclosure, participatory, or ambient) information each tool aims to provide. The project team will also conduct a heuristic usability analysis of a subset of these transparency tools and identify common usability barriers.