My name is Jinwook Kim. My goal is to make a product or service that can change people's lives!
I'm interested in HCI, VR/AR, and Cognitive Science. Specifically, I am working on spatial interaction that augments human performance, perception, and immersive experience using multimodal I/O in XR. I also enjoy playing soccer and watching movies.
Jinwook Kim
+82 10-9477-5794
jinwook.kim31@gmail.com
Graduate School of Culture Technology (Ph.D. Candidate) • 2021 ~
Visual Cognition Lab • Advisor: Jeongmi Lee
Graduate School of Culture Technology (M.S.) • 2019 ~ 2021 • Advisor: Jeongmi Lee
Thesis: Multisensory Pseudo-Haptic Feedback for Weight Perception of Virtual Objects
Computer Science & Engineering (B.S.) • 2014 ~ 2018 • Global SW Track Member
Visiting Researcher• Apr~Jul, 2024
Visited the XI research group at Aarhus University (Advisor: Prof. Ken Pfeuffer) and conducted research on gaze-based XR interaction (Gaze & Pinch).
Research Assistant• 2020 ~ 2022
Worked at the Center for Cognition and Sociality (Advisor: Dr. Yee Joon Kim). Learned EEG signal processing and data analysis using MATLAB and MNE. Building on this, I conducted an EEG experiment measuring vection and the P3 oddball ERP in VR.
UX Researcher• Jun ~ Dec, 2018
Worked at Companoid Labs, Seoul National University Advanced Institute of Convergence Technology (SNU AICT). During the internship, I developed an interactive social robot (Petbe) and published it at the HRI conference. After the internship, I analyzed user data from the 'Doctor Diary' app (a diabetes-logging app) to redesign the app and improve UX for different user types.
Intern / Programmer• 2015 / 2017 / Apr ~ Jun, 2018
Worked at an ETRI start-up company (7 months total; 2 + 2 + 3). Developed an Android GPS application for localization with LoRa signals and built a LoRa node prototype with a Raspberry Pi and Arduino.
KAIST Scholarship (4M Won)• July, 2024
Best Paper Award• March, 2024
Audience Choice Award• November, 2022
NC-KAIST Scholarship (10M Won)• June, 2021
2nd Prize (Naver Z)• May, 2019
Dean Award• December, 2017
1st Prize• Jun ~ Aug, 2017
2nd Prize• Aug 21~26, 2016
Semi Final (TOP 7)• March, 2016
Research Goal: Augment human performance, perception, and immersive experience by using multimodal I/O in XR.
You can find the full publication list on Google Scholar or in my CV.
CHI Conference on Human Factors in Computing Systems, 2024• C.Lee*, J.Kim*, H.Yi, W.Lee
IEEE Transactions on Visualization and Computer Graphics, 2024 (Best Paper Award)• S.Jeong, J.Kim, J.Lee
IEEE Transactions on Visualization and Computer Graphics, 2023• S.Lee, J.Kim, J.Lee
International Journal of Human–Computer Interaction, 2022• J.Kim, H.Jang, D.Kim, J.Lee
IEEE Access, 2022• J.Kim, S.Kim, J.Lee
HRI 2023 Short paper• J.Kim*, D.Kim*, B.Kim, H.Kim, J.Lee
CHI Play 2022 SGDC (Audience Choice Award)• J.Kim, P.Koh, S.Kang, H.Jang, J.Lee, J.Nam, Y.Y.Doh
HRI 2020 Abstract• J.Kim, K.Baek, J.Jang
Art-Tech / VR, AR, Robotics• 2021 ~ 2022
We conducted a project with Hyundai ZER01NE 2021, Automatic Sonata (2021), which proposes a new meaning for in-vehicle space and changes it into a purposed space. The following year, we built Holo-Bot (2022), a hologram-based next-generation XR social interface robot. During the project, I also participated in a study group and built AR art projects with Spot from Boston Dynamics.
Mentoring• 2019 ~ 2022
I participated in various programming education classes as a mentor for Naver Connect (Python, C, data analysis), MODU Lab (ML/DL), the KAIST Gifted Youth Camp (block programming), and the ARKO Art Center ART TALK (VR). I also developed Unity education content based on the LEGO Microgame for the KAIST Cyber Talented Center.
CNU Scholarship Program• Jan ~ Feb, 2017
I was selected for the program and worked on the 'Group Membership and Authentication (Intel)' project with students at Purdue University for two months. I also attended the 'Design & Innovation' and 'Data Mining & Machine Learning' classes during the visit.
Programming: Python (ML, Data Analysis, Visualization), Java, Unity C#, C
IoT Application: Arduino, Raspberry Pi, Sensors, Circuit Design, Flask (REST API)
XR Prototyping: ARFoundation, UX Research
Bio-Signal: Cognionics (EEG Quick30), Emotiv, NextMind, MNE, MATLAB (EEGLAB, ERPLAB)
"내 움직임에 집중하고 그것을 느끼는 것, 이것만으로도 나는 위로 받았다."
Self-Salutation 시리즈는 주의를 바짝 기울였을 때에만 느낄 수 있는 '스스로의 상호작용'에서 출발한 작업이다. Salutation은 인사라는 뜻이며, Salut는 구원과 안녕, 그리고 바다 위 배들 사이에 오가는 확인신호의 의미를 갖는다.
Ver.4 '선명하게 다가가기'에서는, 증강현실(AR) 기술을 사용해, 불확실성의 연속인 삶 속 불안이나 안도와 같이 ‘보이지 않는 것’을 ‘보이는 것’으로 드러낸다. 몸과 마음의 끊임없는 신호들을 읽어내는 가운데 보다 선명하게 자신을 인지하고, 이를 통해 스스로를 구원해가는 과정을 담았다.
Made up of Jinwook Kim, Dooyoung Kim, Hyunchul Kim, and Bowon Kim, ZER01NE Creator CT3K continually seeks to break down the boundaries between real and virtual. Noting that human social connections have transcended the constraints of physical space, CT3K considers new modes of connection and coexistence through the medium of robots.
As a project, Holobot: Hologram-Based Next Generation XR Social Interface Robot began with a simple question: what if it were really possible to be connected to a different being and feel feelings actually taking place in a different space, like in the movie Avatar (2009)? Using a tele-presence system that gives the feeling of being face to face with one's counterpart, Holobot proposes the possibility of a world of extended reality (XR). Through a display, remote users in other spaces are augmented and displayed in the form of holographic avatars, creating a sense of space and dimension. Remote users connected to Holobot can communicate audio-visually through VR, and move freely by using a controller. What is more, the system includes the ability to recognize and avoid various obstacles, helping users to overcome environmental limitations they failed to predict. Functions like these allow remote users to exist as subjects in space, eliciting a sense of coexistence with people plugging in from other locations.
ZER01NE Creator CT3K looks at robots as mediators for spatial connection. Indeed, according to CT3K, the true existential function of robots is not to replace humans, but to bring humans ever closer to one another. At the same time, CT3K also opens up the possibility of scaling to future mobility technologies beyond relatable interfaces, raising questions about future mobility. By presenting alternatives to movement, or even modes of movement that transcend the physical realm, this creator proposes a new direction for the development of mobility technology.
Along with the development of VR HMDs, a growing number of VR games have been published in the game market recently. However, most of these games target a single player, and VR games played together with other platforms have yet to be fully explored. Therefore, we propose Seung-ee & Kkaebi, a VR-mobile multi-platform game. The level design for a cross-platform VR game was the most challenging part; we addressed it by turning the pros and cons of each platform (e.g., the limited field of view on mobile, the limited locomotion space in VR) into game features. The VR player plays as a Jangseung (Seeker), a traditional Korean totem pole, and the mobile player as a Dokkaebi (Hider), a mischievous prankster. Each platform has its own unique skill set for interrupting the other in order to win the game.
Game, Multi-platform, Virtual Reality, Mobile, Hide & Seek
Although recent developments in VR tracking technology have made VR more portable and convenient, VR must overcome its limited traversable space to be used for more general purposes. Since future VR HMDs aim to embed bio-sensors such as eye-tracking and electroencephalography (EEG), the present study investigates how bio-sensors could be integrated into VR locomotion and how usability could be improved. In this paper, teleportation methods were developed by combining EEG, eye-tracking, and hand-tracking for the location-targeting and teleport-triggering processes. In a static teleportation task, we compared the efficiency, accuracy, and usability of each method as well as the interactions between different combinations of methods. The experimental results verified that these locomotion methods are, overall, suitable for hands-free VR content, and revealed the relative strengths and weaknesses of each method. We also identified appropriate combinations of locomotion methods based on context, and components that require improvement for future applications.
Virtual Reality, Locomotion, Bio-sensor
Dreamy is an emotional robot inspired by the dreamcatcher, which is said to catch nightmares and bring good dreams when hung near where you sleep. Dreamy listens to the story of what you saw in your last dream and visualizes the contents as an image collage. Instead of the dream fading from human memory, the robot remembers it forever. The dream data gathered in this way goes further and forms a new, larger collage. The ever-increasing number at the top of the screen is a virtual count of the dream data left with Dreamy, and the image projected into the space through the projector is the collection of countless dreams seen by the robot (system). The projection reflects people's dreams, and its appearance constantly changes.
Social Robot, Smart Speaker
As the saying goes, "one picture is worth a thousand words." A simple visual representation often trumps written forms of information. Hence my team and I developed a wearable device that helps users demonstrate their product ideas inside virtual reality. First, the company designs its idea as a 3D model, which is then transferred into the VR application. Customers then put on an HMD (Google Cardboard, GearVR) and the glove to control a virtual hand inside the VR application and experience the product. The glove was built with an Arduino 101 and connected to the GearVR via Bluetooth, and we used a Kalman filter to smooth the transmitted hand-motion data. The VR application was built with Unity and a VR library.
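Since the glove streams noisy motion data, a scalar Kalman filter is a natural smoothing step. Below is a minimal sketch of that idea in Python (the actual filtering ran on the Arduino/Unity side); the noise parameters and the constant-state model are illustrative assumptions, not the project's actual values.

```python
# Minimal 1D Kalman filter sketch for smoothing one noisy sensor channel
# (e.g., a single orientation axis streamed from the glove). The process and
# measurement noise values are illustrative placeholders.

def kalman_smooth(readings, process_var=1e-3, measurement_var=1e-1):
    estimate, error = readings[0], 1.0   # start from the first reading
    smoothed = []
    for z in readings:
        # Predict: the state is modeled as constant, so only uncertainty grows.
        error += process_var
        # Update: blend the prediction and the new measurement via the Kalman gain.
        gain = error / (error + measurement_var)
        estimate += gain * (z - estimate)
        error *= (1.0 - gain)
        smoothed.append(estimate)
    return smoothed

if __name__ == "__main__":
    import random
    noisy = [10.0 + random.gauss(0, 1.0) for _ in range(50)]  # fake sensor stream
    print(kalman_smooth(noisy)[-5:])
```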
CT3K is a team composed of a professional AR/VR engineer, an HCI researcher, and a graphics researcher. They formed the team around the notion that autonomous driving will fundamentally change the meaning of space in vehicles, and CT3K tries to define the new meaning of such spaces and change them into purposed spaces.
The core technology of their project 〈Automatic Sonata〉 is style transfer, an AI technique that transforms the landscape outside the windows based on real-time road and environment data collected by the operating vehicle. The vehicle windows, acting as displays, cover the passengers' view and create a spatial feeling that crosses between reality and virtual reality. This unique visual experience, combined with repetitive and monotonous landscapes, changes the meaning of the vehicle from a means of transport to a valued space.
In this exhibition, audiences can not only see how the real-time landscape changes while driving through three-sided displays and how varied these changes can be, but also imagine the potential of mobility-space development in the era of autonomous driving. The landscape view is transformed using the painting styles of van Gogh, Picasso, and Kandinsky, representative artists of Western modern art with distinct styles. CT3K suggests the value of in-vehicle space through a selectable everyday experience and arresting moments that offer a chance to imagine a new future environment.
2021 NextRise @ ZER01NE
Augmented Reality, Robotics, Unity
Students living in dormitories have probably experienced a common issue: waiting for their turn to use the washing machine. Our team ran a survey over a two-week period. From the collected responses, we concluded that students keep similar schedules despite taking different subjects, making 7~9 pm the laundry rush hour. Because of this inefficiency, a lot of time is wasted and laundry is often lost.
So we designed an IoT service: a smartphone application that shows whether the washing machines are in use. Our team built Android and iOS versions of the application. I designed a Wi-Fi current-meter module with an ESP8266, Arduino, and a 3D printer, and I built the database on a Raspberry Pi and tested it with PHP HTTP communication.
We are preparing to install our product in all dormitories at Chungnam National University and to generate revenue through Google Ads.
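To illustrate the idea behind the current-meter module, the sketch below maps a current reading to a machine status and posts it to a server; the threshold, endpoint URL, and JSON payload are hypothetical placeholders, not the actual ESP8266 firmware or PHP backend.

```python
# Sketch: classify a washing machine as in use from its current draw and
# report the status to a server. The threshold, machine IDs, endpoint URL,
# and payload format are hypothetical placeholders.
import json
import urllib.request

IDLE_CURRENT_A = 0.3  # assumed cutoff; a running washer draws noticeably more

def machine_status(current_amps):
    """Map one current reading (in amps) to a status string."""
    return "in_use" if current_amps > IDLE_CURRENT_A else "free"

def report(machine_id, current_amps, url="http://example.com/laundry/update"):
    """POST the status so the smartphone app can poll it (illustrative endpoint)."""
    payload = json.dumps({"id": machine_id,
                          "status": machine_status(current_amps)}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

print(machine_status(4.2))  # "in_use"
print(machine_status(0.1))  # "free"
```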
IoT (Internet of Things) technology is drawing attention for its capabilities. Many IoT products use Wi-Fi, Bluetooth, or Zigbee for communication; however, these technologies are relatively expensive and power-hungry. Our team designed an IoT product based on Li-Fi to address these problems.
Li-Fi is a communication technology that uses LEDs, which can commonly be found in households. We designed a module that uses a Raspberry Pi 3 to control LEDs and send light signals; an Arduino with a photodiode receives and decodes the signals to control various devices.
I was in charge of building the Arduino modules, programmed the send/receive algorithms using Manchester encoding, and printed the module case on a 3D printer.
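As a rough illustration of the Manchester scheme used for the light signal (each data bit is sent as a transition, so the receiver can recover timing), here is a Python sketch of the encode/decode pair; the bit convention and framing are assumptions for illustration rather than the project's Arduino code.

```python
# Manchester encoding sketch (assumed IEEE 802.3 convention):
# bit 0 -> high, low   and   bit 1 -> low, high.
# Each byte of the message becomes 16 LED on/off symbols.

def manchester_encode(data: bytes):
    symbols = []
    for byte in data:
        for i in range(7, -1, -1):          # most significant bit first
            bit = (byte >> i) & 1
            symbols += [0, 1] if bit else [1, 0]
    return symbols

def manchester_decode(symbols):
    bits = []
    for first, second in zip(symbols[0::2], symbols[1::2]):
        bits.append(1 if (first, second) == (0, 1) else 0)
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

msg = b"LiFi"
assert manchester_decode(manchester_encode(msg)) == msg
```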
Drone technology has been gaining popularity recently, and related research is being pursued actively. Onboard intelligence plays a key role in commercializing these machines, and a real-time image processor is essential for building smart drones. My team used an FPGA board and an SoC drone to accelerate the algorithm's computation while consuming little power.
Our aim was to make the drone recognize and then follow a red car using image processing. We implemented the FAST9 corner-detection algorithm in C and C++ to understand it, then converted it into Verilog. Finally, we programmed the FPGA-drone integration using the drone's device driver.
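For reference, the FAST9 test itself is compact: a pixel counts as a corner when at least 9 contiguous pixels on a 16-pixel Bresenham circle around it are all brighter or all darker than the center by a threshold. The Python sketch below shows that test on a synthetic image; it stands in for the C/C++ reference we wrote, with an illustrative threshold.

```python
# FAST9 corner test sketch: a pixel is a corner if 9 contiguous pixels on a
# 16-pixel Bresenham circle (radius 3) are all brighter or all darker than
# the center pixel by more than `t`. Threshold and image handling are illustrative.

CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast9_corner(img, x, y, t=20):
    center = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for sign in (+1, -1):                      # check a brighter run, then a darker run
        flags = [sign * (p - center) > t for p in ring]
        run = 0
        for f in flags + flags[:8]:            # wrap around the circle
            run = run + 1 if f else 0
            if run >= 9:
                return True
    return False

# Tiny synthetic example: a bright 7x7 patch on a dark background.
img = [[0] * 16 for _ in range(16)]
for yy in range(8, 15):
    for xx in range(8, 15):
        img[yy][xx] = 255
print(is_fast9_corner(img, 8, 8))   # corner of the bright patch -> True
```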
In the CNU pre-Capstone Design Program, applicants were required to work with students from different majors; in our case, we worked with students from the Electronic Engineering and Mechatronics Engineering departments. I was in charge of implementing the image-processing algorithm and served as the team leader.
The importance of nonverbal communication is stressed by the most prominent speakers and lecturers. However, this kind of expression is limited when the presenter's hands are occupied with a microphone or a wireless presenter. These distractions degrade the quality of communication and ultimately lower the quality of the presentation.
So our team built the wireless-presenter functions into a glove to resolve this issue. We used an Arduino Leonardo and an RN42 (HID) Bluetooth module to control slides. We also attached a laser module that acts as a pointer and an LCD that shows the time.
I was the team leader of this project and was in charge of the hardware and the Bluetooth communication with the PC.
The demand for pets is increasing due to the growing proportion of single-person households, the shift toward nuclear families, and an aging society. For this reason, many related services, such as IoT devices and pet cafes, are being developed. We surveyed office workers and students in their 20s and 30s who raise pets and found that they worry about their pets staying home alone for 6 to 12 hours a day, which can lead to separation anxiety.
To solve this problem, we collaborated with a veterinary student at Chungnam National University. The service analyzes pet behavior from a veterinary perspective and delivers it to owners through an SNS chatbot: it periodically sends messages about behaviors such as getting into mischief, eating, or waiting for the owner. When the owner responds, the IoT device gives feedback to the pet by voice or motion. This service can give pets the pleasure of being together anytime, anywhere, and help prevent separation anxiety.
The demand for pet monitoring devices is growing due to the increasing number of one-person households raising pets. However, current monitoring methods that use video cameras entail various problems, which may lead to discontinued usage. To overcome this, we propose Petbe, a social robot that projects your own pet using a context-aware approach based on BLE beacons and Raspberry Pis. The corresponding smartphone application provides various robot status updates (robot head) and movements (robot body). With Petbe, we conducted an exploratory study to verify improvements on the above issues when monitoring one's own pet, considering the following factors: privacy concern, companionship, awareness, connectivity, and satisfaction. The outcomes indicate that Petbe helps reduce privacy concerns and build companionship through empathetic interaction.
IoT, Social Robot, Context Aware, BLE
Displacement between the virtual hand and the real hand is currently studied as a way to assign weight to a virtual object. Rietzler et al. (2018) and Samad et al. (2019) developed control/display (C/D) ratio manipulations and conducted experiments to find an appropriate offset value for weight assignment and an acceptable C/D ratio. Even though these approaches can be effective for short, simple interactions, the displacement can cause several issues in long, complex interactions. Therefore, we are working on another method of assigning weight to a virtual object that could solve this problem.
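To make the C/D-ratio idea concrete: while an object is held, the virtual hand covers only a fraction of the real hand's displacement, so heavier objects demand more real movement. The sketch below is a simplified illustration of that mapping; the mass-to-ratio function and values are hypothetical, not those from Rietzler et al. or Samad et al.

```python
# Sketch of a control/display (C/D) ratio manipulation for pseudo-haptic weight:
# the virtual hand covers only `cd_ratio` of the real hand's displacement while
# the object is held, so heavier objects feel like they need more real movement.
# The mapping from mass to ratio is an illustrative placeholder.

def cd_ratio_for_mass(mass_kg, min_ratio=0.6):
    """Heavier virtual objects get a smaller C/D ratio (clamped at min_ratio)."""
    return max(min_ratio, 1.0 - 0.1 * mass_kg)

def virtual_hand_pos(grab_pos, real_hand_pos, cd_ratio):
    """Scale the hand's displacement since grab onset by the C/D ratio."""
    return tuple(g + cd_ratio * (r - g) for g, r in zip(grab_pos, real_hand_pos))

grab = (0.0, 1.0, 0.3)            # hand position when the object was grabbed
real = (0.0, 1.3, 0.3)            # real hand lifted 30 cm
print(virtual_hand_pos(grab, real, cd_ratio_for_mass(3.0)))  # virtual lift ~= 21 cm
```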
Pseudo-haptic, Weight perception, VR
If you have any questions, please contact me!
Thank you!!