
Prof. Tae-Seong Kim
Kyung Hee University, Republic of Korea
Director, Bioimaging and Brain Engineering
Laboratory, Kyung Hee University, Republic of Korea
Tae-Seong Kim received the B.S.
degree in Biomedical Engineering from the University
of Southern California (USC) in 1991, M.S. degrees
in Biomedical and Electrical Engineering from USC in
1993 and 1998, respectively, and the Ph.D. degree in
Biomedical Engineering from USC in 1999. After his postdoctoral
work in Cognitive Sciences at the University of
California at Irvine in 2000, he joined the Alfred
E. Mann Institute for Biomedical Engineering and
the Department of Biomedical Engineering at USC as a Research
Scientist and Research Assistant Professor. In 2004,
he moved to Kyung Hee University in the Republic of
Korea, where he is currently a Professor in the
Department of Biomedical Engineering. His research
interests have spanned various areas of biomedical
imaging, bioelectromagnetism, neural engineering,
and assistive lifecare technologies. Dr. Kim has
been developing novel methodologies in the fields of
signal and image processing, pattern classification,
machine learning, and artificial intelligence.
Lately, Dr. Kim has started novel projects in the
development of smart robotics and machine vision
with deep learning methodologies. Dr. Kim has
published more than 350 papers and ten international
book chapters. He holds ten international and
domestic patents and has received ten best paper
awards.
Speech Title: "Deep Learning AI Methodologies and Their Applications in Biomedical Technologies"
Abstract: In the era of artificial
intelligence (AI), biomedical technologies are being
transformed into a new domain of biomedical AI. As
AI has wide applications in the field of biomedical
engineering and technologies, it will transform
medicine and healthcare in the near future. Among
various machine learning principles and techniques,
deep learning is leading this new development of
biomedical AI. In this presentation, major deep
learning principles and methodologies including
convolutional neural networks, recurrent neural
networks, auto-encoders, and reinforcement learning
will be introduced. Then their applications to
biomedical technologies will be presented, including
biomedical computer-aided diagnostic (CAD) systems,
human activity recognition of assistive lifecare
systems, biomedical machine vision systems, and
humanoid robotics.
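As an illustration only (not part of the talk itself), the short sketch below shows the kind of convolutional neural network commonly used in biomedical computer-aided diagnostic (CAD) systems, classifying a grayscale image patch as normal or abnormal. The network name, layer sizes, and input resolution are assumptions chosen for demonstration.

```python
import torch
import torch.nn as nn

# Minimal, illustrative CNN for a CAD-style binary classification task.
# Architecture and sizes are arbitrary assumptions, not from the talk.
class TinyCADNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # For 64x64 inputs, two poolings leave a 16x16 feature map with 32 channels.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a dummy 64x64 grayscale patch.
logits = TinyCADNet()(torch.randn(1, 1, 64, 64))
```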

Prof. Hiroshi Noborio
Osaka Electro-Communication University, Japan
Hiroshi Noborio received the
B.Eng. and M.Eng. degrees in Computer Science
from Shizuoka University in 1982 and 1984,
respectively. He then received the Ph.D.
from the Department of Mechanical Engineering, Faculty of
Engineering Science, Osaka University, in 1987.
He subsequently moved to Osaka Electro-Communication
University, where he has worked in the Precision Engineering,
Engineering Informatics, and Computer Science departments.
He also worked at TU Munich as a guest researcher
supported by a Humboldt Fellowship. His
research interests have spanned various areas of
Robotics, CV (computer vision), CG (computer
graphics), and XR (VR+MR+AR). Prof. Noborio has
developed sensor-based and model-based
navigation. Lately he has started research on
surgical guidance for dental implants and on surgical
simulation/navigation of the liver, kidney, and brain.
This work has been supported mainly by the Grants-in-Aid for
Scientific Research program of MEXT and
JSPS, among others. Dr. Noborio nominated his research,
"Interference Check Algorithm Based on the
Representation," as one of the works in "The History of
Robot Research and Development in Japan," compiled for the 20th
anniversary of the RSJ. He has also published more than 200
papers and 35 international book chapters and has
received several best paper awards.
Speech Title: "Depth-Depth Matching of Virtual and Real Images for a Surgical Navigation System"
Abstract: The key idea of our
surgical navigation system is the depth-depth
matching (DDM) of virtual and real organ images.
The depth image of the virtual organ comes from
the Z-buffer of the GPU (Graphics Processing Unit)
rendering a virtual organ modeled by STL
(stereolithography) data. The depth image of the real
organ, on the other hand, is captured by an
arbitrary depth camera.
Therefore, DDM requires only L
non-combinatorial subtractions and additions
between the virtual and real 2D depth images, where L is
the number of pixels, about one hundred
thousand. In contrast, the most popular
Iterative Closest Point (ICP) algorithm in the Point
Cloud Library is time-consuming when checking the
coincidence of the two point clouds of
whole organs. The reasons are as follows: (1)
ICP needs a combinatorial M*N calculation of the
Euclidean distances between 3D cloud points (M and N
are usually near one hundred thousand). (2) Since the
real organ is obstructed by the patient's body,
the capture direction is restricted to the top
view on or near the shadowless lamp.
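The complexity argument above can be illustrated with a minimal NumPy sketch of a DDM-style cost: a pixel-wise comparison of two depth images needs only on the order of L subtractions and additions. The array names, image resolution, and masking step are assumptions for illustration, not the authors' implementation; a full system would re-render the virtual Z-buffer for each candidate pose of the organ and minimize this cost.

```python
import numpy as np

# Illustrative depth-depth matching (DDM) cost. In the described system the
# virtual depth image would come from the GPU Z-buffer of the STL organ model
# and the real depth image from a depth camera; here both are simulated.
H, W = 320, 240                       # L = H * W, on the order of 10^5 pixels
virtual_depth = np.random.rand(H, W)  # stand-in for the Z-buffer depth image
real_depth = np.random.rand(H, W)     # stand-in for the depth-camera image


def ddm_cost(virtual_depth, real_depth, mask=None):
    """Pixel-wise matching cost: only L subtractions and L additions."""
    diff = np.abs(virtual_depth - real_depth)  # L subtractions
    if mask is not None:                       # optionally ignore occluded pixels
        diff = diff[mask]
    return diff.sum()                          # L additions


print(ddm_cost(virtual_depth, real_depth))
```

By contrast, a point-to-point ICP step must evaluate on the order of M*N Euclidean distances between two clouds of roughly 100,000 points each, which is why the pixel-wise depth comparison is far cheaper per iteration.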