This is a web-based introduction to our laboratories, offered in place of the cancelled "Open Lab Autumn Stage" in FY2022.
Web version of Open Lab (Introduction of laboratories)
AI, machine learning, and applications
In this lab, students conduct research on the following topics (but not limited to these): Through experiments, students become familiar with various machine learning tools and learn how to solve practical problems.
We are conducting joint research with several companies in Japan. Students learn how to solve real problems by combining high-tech approaches (e.g., deep learning) with low-tech ones (conventional image/signal processing methods).
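As a rough illustration of that "high-tech plus low-tech" idea (a toy sketch, not the lab's actual pipeline; the dataset and model choices are arbitrary), a conventional Sobel edge filter can produce features that a learned classifier then uses:

```python
# Toy sketch: conventional image processing (Sobel edges) + a learned classifier.
import numpy as np
from scipy import ndimage
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()                      # 8x8 grayscale digit images
images, labels = digits.images, digits.target

def edge_features(img):
    """Low-tech step: gradient magnitude via Sobel filters."""
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    return np.hypot(gx, gy).ravel()

X = np.array([edge_features(img) for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # learned stage
print("test accuracy:", clf.score(X_te, y_te))
```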
Cybersecurity
While the development of information and communication technology has made people's lives more convenient, cyber attacks and cyber crimes have emerged as a negative aspect. Web page of research contents:
Cognitive Science
Everyone has a family of friends named "Signal". We sense them every day. When you listen to music, it is a 1D signal processed by your ears. When you read a manga, each page is a 2D matrix signal processed by your eyes. When you play games, the motion of a character in a 3D world reaches you through a mixed signal system of sound, images, and haptics. Our brain processes these signals to build the fantastic experiences that developers and artists hope to show us. Our lab cares deeply about signals and how they work. Beyond sensing, signals can help us make decisions, not only for behaviors in the physical world such as operating a machine or profiting from financial trading, but also for creating smart, interactive behavior of NPCs in virtual worlds. Web page of research contents:
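To make the "1D vs. 2D signal" idea concrete, here is a small, self-contained sketch (illustrative only, not lab code) that treats a tone as a 1D array and an image as a 2D matrix:

```python
# Signals as arrays of numbers: a 1D audio tone and a 2D "page" of pixels.
import numpy as np

fs = 8000                                   # sampling rate of the 1D audio signal (Hz)
t = np.arange(fs) / fs                      # one second of time stamps
tone = np.sin(2 * np.pi * 440 * t)          # "music": a 440 Hz sine wave

spectrum = np.abs(np.fft.rfft(tone))        # ears (and FFTs) analyze frequency content
peak_hz = np.fft.rfftfreq(len(tone), 1 / fs)[np.argmax(spectrum)]
print("strongest frequency:", peak_hz, "Hz")   # ~440 Hz

page = np.random.rand(64, 64)               # "manga page": a 2D matrix of pixel values
print("image signal shape:", page.shape)    # eyes process width x height
```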
Edge AI Research and Applications (Computer Engineering)
Efforts to use AI on site, at the edge (so-called edge AI), will increase in the future. Web page of research contents:
The wildlife warning system is being demonstrated in Aizu Wakamatsu City, Kitakata City, and Aizu Misato Town.
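The following is a deliberately simplified, hypothetical sketch of what an edge-AI loop can look like; the frame source, detector, and threshold are all stand-ins, not the deployed wildlife warning system:

```python
# Hypothetical edge-AI loop: capture a frame, run a lightweight detector on the
# device, and raise a warning locally instead of streaming raw video to a server.
import numpy as np

def read_frame():
    """Stand-in for a camera capture on the edge device."""
    return np.random.rand(120, 160)            # fake grayscale frame

def detect_animal(frame):
    """Stand-in for a small on-device model; returns a confidence in [0, 1]."""
    return float(frame.mean())                 # placeholder score, not a real model

WARNING_THRESHOLD = 0.5                        # assumed value

for _ in range(5):                             # in practice this loop runs continuously
    score = detect_animal(read_frame())
    if score > WARNING_THRESHOLD:
        print("wildlife warning issued (score %.2f)" % score)
    else:
        print("no animal detected (score %.2f)" % score)
```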
Motion sensing, sensing technology
We are creating a digital internet world that is independent of the real, analog world we live in. Recently, however, with the development of intelligent sensing technologies, the two worlds are merging, so that in the near future people will hardly be able to tell them apart. Our lab focuses on creating various new intelligent sensing technologies to facilitate this merging of the digital and analog worlds. Lab HP: u-aizu.ac.jp/~leijing/ Demos: WonderSense, daily activity recognition
JING Lei With our data gloves, people can directly touch the virtual world.
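As a rough flavor of daily-activity recognition (a toy sketch on simulated accelerometer data, not the lab's method or the WonderSense system), windowed sensor signals can be turned into simple features and classified:

```python
# Toy activity recognition: simulated accelerometer windows -> features -> classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def fake_window(activity):
    """Simulate a 2-second, 50 Hz accelerometer magnitude window."""
    t = np.arange(100) / 50.0
    if activity == "walk":
        return 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(100)
    return 1.0 + 0.02 * rng.standard_normal(100)       # "rest"

X, y = [], []
for label in ["walk", "rest"]:
    for _ in range(100):
        w = fake_window(label)
        X.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])  # hand-made features
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```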
Game AI
AI (artificial intelligence) is the "brain" of a computer. As a general rule, we are always striving to make AI smarter. For example, in the same way that the facial recognition function of a smartphone can recognize a face even with glasses on, a robot vacuum cleaner can avoid socks left on the floor. Sometimes, however, this cleverness can also cause undesirable results. For example, clever AI can easily defeat humans in games such as chess or shogi without making any silly mistakes. Therefore, simply "creating clever AI" is not always the goal.

We are using games to research various AI systems. Games need to be fun, so we need to understand what type of AI will serve as a suitable teammate or an intriguing enemy character. In our research, we focus primarily on "realistic", human-like AI. For example, in soccer games and fighting games, it is desirable to have various opponents with diverse qualities and habits, and it is important that the characters do not move like robots.

Studying human behavior is essential for creating such "realistic" characters. First, we record the behavior of players and analyze what behavior they have in common. This is a difficult task, and our approach depends on the genre of the game. Currently, we are studying the relatively simple games of tennis, fighting, and soccer. Soccer is particularly interesting because it requires understanding the movements of the team as a whole rather than individual plays.

Games provide a simple, enjoyable experimental environment for AI systems. At the same time, we believe that our research results can be applied to areas other than games. "Fun AI" and "human-like AI systems" have significant potential for education and medical care. By studying AI, we can understand why we feel something is "fun" or "real," and therefore better understand ourselves. Web page of research contents:
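As a minimal, hypothetical sketch of the "imitate recorded human play" idea (not the lab's actual system), one can count which action a player chose in each situation and then sample actions with the same frequencies, so the character is competent but not robotically optimal:

```python
# Toy "human-like" opponent: imitate the action frequencies of a recorded player.
import random
from collections import Counter, defaultdict

# Hypothetical log of (situation, action) pairs recorded from a human player.
log = [("opponent_far", "approach"), ("opponent_far", "approach"),
       ("opponent_far", "wait"), ("opponent_near", "attack"),
       ("opponent_near", "attack"), ("opponent_near", "block")]

counts = defaultdict(Counter)
for situation, action in log:
    counts[situation][action] += 1

def humanlike_action(situation):
    """Sample an action with the same frequencies the human showed."""
    actions, freqs = zip(*counts[situation].items())
    return random.choices(actions, weights=freqs, k=1)[0]

print(humanlike_action("opponent_near"))   # usually "attack", sometimes "block"
```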
Pattern Recognition
Our laboratory focuses primarily on human-computer interaction, pattern processing, and recognition based on signal and image analysis. As part of our pattern processing research, we have done extensive work on handwritten character recognition, signature verification, font generation, human identification, human activity recognition, and so on. Web page of research contents:
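For a feel of the simplest case, here is a classic handwritten-digit recognition example using scikit-learn's bundled dataset; it is a stand-in illustration, far simpler than the lab's research:

```python
# Classic handwritten-digit recognition on scikit-learn's built-in digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                                  # 1797 8x8 digit images
X_tr, X_te, y_tr, y_te = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001).fit(X_tr, y_tr)                  # support vector classifier
print("test accuracy:", clf.score(X_te, y_te))          # typically > 0.98
```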
Game design
Multimodal interaction refers to the integration of control (input) and display (output) of media, including visual, auditory, and haptic modalities: graphics, sound, and touch. The Spatial Media Group in the Computer Arts Lab conducts research on practical and creative interfaces to enhance communication and expression: 3D graphics and panoramic ("360°") imagery; spatial sound, audio, and computer music; digital typography, hypermedia, and electronic publishing; smartphone & mobile computing; and XR (extended reality): VR (virtual reality), MR (mixed reality) & AR (augmented reality) for immersive sensation.

We investigate various kinds of stereoscopic displays ("3D" imagery) that express not only width and height but also depth; multichannel (polyphonic) spatial audio ("3D sound") systems with rich, dynamic soundscapes; and realtime applications that close the feedback loop between input and output for immediate reaction and live, online experience. We explore user interfaces that incorporate vision, hearing, and proprioception, including 3D printing, physical rigging (connection of physical controllers to virtual objects), IoT (internet of things), and "ubicomp" (ubiquitous computing) for cyberphysical systems.

Groupware is software for groups, allowing teamwork and collaboration, including conferencing, team design, and musical ensemble performance. Our focus is the exploration of networked multiuser interfaces with realtime multimodal interactivity, including visual musical systems, games & toys, simulations, and story-telling. Web page of research contents:
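As one tiny, simplified example of spatial audio (constant-power amplitude panning; an assumption for illustration, not the group's actual renderer), a mono signal can be placed between the left and right channels:

```python
# Constant-power amplitude panning of a mono tone into a stereo buffer.
import numpy as np

fs = 44100
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t)             # one second of a 440 Hz tone

def pan(signal, position):
    """position in [0, 1]: 0 = hard left, 1 = hard right (constant-power law)."""
    left = np.cos(position * np.pi / 2) * signal
    right = np.sin(position * np.pi / 2) * signal
    return np.stack([left, right], axis=1)     # stereo buffer, shape (samples, 2)

stereo = pan(mono, 0.75)                       # place the tone mostly to the right
print(stereo.shape)                            # (44100, 2)
```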
Biomedical Engineering
Web page of research contents:
Our goal is to contribute to people's disease prevention and health promotion by seamlessly measuring and comprehensively analyzing various kinds of biological information, anytime and anywhere, without affecting daily life.
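As a small illustration of analyzing a biological signal (simulated data and a made-up threshold, not the lab's measurement system), heart rate can be estimated by detecting peaks in a pulse-like waveform:

```python
# Estimate heart rate from a simulated pulse signal by peak detection.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                        # sampling rate (Hz)
t = np.arange(30 * fs) / fs                     # 30 seconds of data
pulse = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(len(t))  # ~72 bpm

peaks, _ = find_peaks(pulse, distance=fs * 0.5)   # at least 0.5 s between beats
bpm = 60 * len(peaks) / (len(t) / fs)
print("estimated heart rate: %.0f bpm" % bpm)
```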
Visually appealing computer visualization
We are researching computer visualization, a technology that helps people understand information by converting various types of data into visual information that can be seen by the human eye. Web page of research contents: Demonstration Video: Research introduction video (2020 version):
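A minimal example of the idea, turning a grid of numbers into a picture (illustrative only; the data here are random):

```python
# Visualization in a nutshell: map numeric values to colors in an image.
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(20, 20)                 # some 2D data (e.g., sensor readings)

plt.imshow(data, cmap="viridis")              # map values to colors
plt.colorbar(label="value")
plt.title("Numbers become a picture")
plt.savefig("heatmap.png")                    # write the visualization to a file
```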
System frameworks for robots and sensors
For robots to act autonomously, many functions are needed, such as object recognition, localization, motion planning, control, and learning, as well as computer systems to support them. To realize autonomous robots, we research and develop the following topics:
Web page of research contents:
Our goal is to bring robots as close to our daily lives as smartphones are. That means multiple robots and sensors must be connected to networks, share data with each other, and act in a coordinated way. We research and develop frameworks toward this goal.
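As a plain-Python sketch of the publish/subscribe idea behind such frameworks (not an actual robot middleware such as ROS), sensors publish messages to topics and robots subscribe and react:

```python
# Minimal in-process publish/subscribe bus for sharing data among robots and sensors.
from collections import defaultdict

class Bus:
    """Publishers send messages to topics; subscribers registered on a topic receive them."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = Bus()
# A robot reacts when a sensor publishes an obstacle position.
bus.subscribe("obstacle", lambda pos: print("robot replanning around", pos))
bus.publish("obstacle", (2.0, 1.5))           # a sensor node reports an obstacle
```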
Big Data and AI
1. Automatic AI - Deep Learning (DL) service generation. Web page of research contents:
Building various artificial intelligence systems: translation, QA systems, medical systems, factory monitoring systems, etc.
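As a very rough analogue of automatic model generation (a hedged sketch with a made-up toy dataset; real DL service generation is far more involved), a search procedure can choose a text-classification configuration automatically:

```python
# Toy "automatic" model building: grid search picks hyperparameters for a text classifier.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

texts = ["ship the order today", "invoice attached for payment",
         "win a free prize now", "claim your free reward"]
labels = ["work", "work", "spam", "spam"]

pipeline = Pipeline([("tfidf", TfidfVectorizer()),
                     ("clf", LogisticRegression(max_iter=1000))])
search = GridSearchCV(pipeline, {"clf__C": [0.1, 1.0, 10.0]}, cv=2)
search.fit(texts, labels)                       # hyperparameters chosen automatically
print(search.best_params_, search.predict(["free prize inside"]))
```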
An Invitation to Deep Space
Students join deep space exploration with faculty members, taking part in lunar and planetary archived-data science as PBL (project-based learning). We work daily on software development, data curation, and data analysis, in combination with remote sensing, machine learning, and related techniques. Interview: HIROHIDE DEMURA【FUKUSHIMA INDEX】
Research Center & Cluster Introduction for Space Informatics (ARC-Space)
Let's go to space together from Fukushima! We dream of a future where Fukushima-made robots are active on the Moon.
Data-oriented research on understanding the Moon, Mars and the Earth (Space and planetary informatics)
The Moon, Mars, and the Earth are similar, yet significantly different. The Earth is unique, with oceans and life. The current environments of Mars and the Moon are both desert-like, with no oceans and no life. The three planetary bodies originated in almost the same region of the solar system. Why are they so different? That is the point I am interested in. We have a huge amount and variety of data on these three bodies. We use those data to study the surface environment and inner structure of each body. We analyze exploration data from the Moon and Mars, and we also develop tools to analyze and visualize the data. In addition, we conduct studies to detect ground movement from satellite data and to monitor active volcanoes on the Earth.
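As a simplified illustration of detecting ground movement (synthetic elevation grids, not real satellite products), two maps of the same area taken at different times can be differenced pixel by pixel:

```python
# Detect ground movement by differencing two elevation maps of the same area.
import numpy as np

before = np.zeros((100, 100))                         # elevation at time 1 (m)
after = before.copy()
after[40:60, 40:60] += 0.3                            # simulated uplift of 30 cm

change = after - before                               # per-pixel elevation change
moved = np.abs(change) > 0.1                          # flag pixels above 10 cm
print("pixels with detected movement:", int(moved.sum()))
```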
OGAWA Yoshiko The Moon and Mars are our Earth's neighbors. However, much about them is still veiled. Do we really understand the Moon and Mars? Why is the Earth so unique, with its diversity of life? Your curiosity is a great motivation. We look forward to having you join us in data-oriented planetary science!
We're Onkyo Lab. We are interested in sound and audio.
We use sound regularly to communicate with others, yet our understanding of it is so limited that there are many opportunities for new technologies to be discovered. We are interested in: Check our website to learn more about who we are and what we do:
JULIAN Villegas We are interested in sound as a vehicle to transmit information between humans and machines. In our research, we often rely on machine learning methods and focus mainly on spatial sound, applied psychoacoustics, and applied phonetics.
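As a small psychoacoustics-flavored sketch (simulated ear signals, not lab code), the interaural time difference - one cue for locating a sound - can be estimated by cross-correlation:

```python
# Estimate the interaural time difference (ITD) between left and right ear signals.
import numpy as np

fs = 48000
noise = np.random.randn(fs // 10)                     # 100 ms of noise at the source
delay = 20                                            # right ear lags by 20 samples
left = noise
right = np.concatenate([np.zeros(delay), noise[:-delay]])

corr = np.correlate(right, left, mode="full")         # cross-correlation
lag = np.argmax(corr) - (len(left) - 1)               # lag with best alignment
print("estimated ITD: %.3f ms" % (1000 * lag / fs))   # about 0.417 ms
```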
Planetary exploration, planetary science
Our laboratory is interested in using lunar and planetary exploration data to investigate the craters, rock masses, and lava flow trajectories visible on lunar and planetary surfaces.
HONDA Chikatoshi We are working with students to ensure that the results of our research and development will contribute to future lunar and planetary exploration.
Liberal arts education
The University of Aizu expects its students to develop rich, healthy minds and bodies while attaining a high degree of expertise in computer science and technology, so that after graduation they can take responsibility for the future.
KARIMAZAWA, Hayato
"Video Introduction to the CLR Phonetics Lab" (phonetics)
The Center for Language Research at the University of Aizu was established in 1993 to foster research in English for Specific Purposes, with special emphasis on the English needed for study and work in the fields of computer science and computer engineering. In addition to research on other areas of second language acquisition, the CLR has always had a strong commitment to research on pronunciation and phonetics. Prof. Ian Wilson joined the university in 2006, and with the help of Prof. Kazuaki Yamauchi, acquired funding for an ultrasound machine and established a separate laboratory dedicated to speech research - the CLR Phonetics Lab. This led to a great increase in the number of students doing phonetics research. Besides Profs. Wilson and Yamauchi, other professors in the lab include Prof. Kaneko and Prof. Perkins.

Prof. Wilson's main research interest is speech production and L2 acquisition of pronunciation. Experimental phonetics - both articulatory and acoustic phonetics - underlies most of his work. Since 2000, he has specialized in using ultrasound as a tool to view and measure the tongue during speech. Prof. Yamauchi is a native speaker and researcher of the Aizu dialect of Japanese. Prof. Kaneko also does research on the Aizu dialect, as well as English pronunciation and elicited imitation/shadowing of speech. Prof. Perkins is focused on tone and phonation, and he does a lot of fieldwork on Asian languages.

To learn more about the professors and students in our lab, please access the People link on our lab homepage. Detailed research information and downloadable papers are available from the Research link. Newspaper articles and news reports about our lab are available from the Media link. This entire website is available in both English and Japanese. CLR Phonetics Lab website:
This research informs pronunciation teachers and speech scientists about the differences in pronunciation between languages - especially Japanese and English.
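As a tiny acoustic-phonetics sketch (a synthetic vowel-like signal, not the lab's ultrasound or speech tools), the fundamental frequency of a sound can be estimated by autocorrelation:

```python
# Estimate the fundamental frequency (pitch) of a vowel-like signal by autocorrelation.
import numpy as np

fs = 16000
t = np.arange(int(0.05 * fs)) / fs                    # 50 ms frame
f0 = 120.0                                            # simulated pitch (Hz)
frame = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))  # harmonics

ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # autocorrelation
min_lag = int(fs / 400)                               # ignore pitches above 400 Hz
peak_lag = min_lag + int(np.argmax(ac[min_lag:]))
print("estimated F0: %.1f Hz" % (fs / peak_lag))      # close to 120 Hz
```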
Phonetics and Phonology
My main research interests involve sound systems in languages of the world. Web page of research contents: