I am a roboticist specializing in 3D computer vision. Most recently I was the founder/indiehacker of MassSim, which focused on creating an interactive simulator for digital humans. Previously, up to 2023, I was at TuSimple, where I developed pose estimation, state estimation, sensor calibration, and mapping technology. Before that, up to 2019, I built 360-degree image capture technology at Fyusion. I have been part of multiple academic labs and internships. I obtained my Ph.D. from the Georgia Institute of Technology in Fall 2019. I received my M.S. and B.S. from Iowa State University in 2011 and 2008, respectively.
DEMOS
SELECTED PUBLICATIONS
(2019) Transforming Multiple Visual Surveys of a Natural Environment Into Time-Lapses. [slides] [video] [dataset]
Shane Griffith, Frank Dellaert, and Cedric Pradalier.
International Journal of Robotics Research (IJRR).
(2016) Reprojection Flow for Image Registration Across Seasons.
Shane Griffith and Cedric Pradalier.
British Machine Vision Conference (BMVC), York, UK.
I completed my PhD during a time of rapid change; deep learning quickly overtook most areas of AI as the state of the art. Yet I had already undertaken work on a field robot, building on well-known, classic perception algorithms. We were trying to solve the hard problem of visual data association across seasons in a natural environment. A recurring question was whether, and how much, deep learning I should use. It turned out that 3D visual geometry, as obtained with classic techniques, was a key source of information for our problem. This was supported by some nice results from biology as well.
Such was the backdrop against which I confronted the limitations of vision in natural environments. The outlook became clearer, as I articulated in my dissertation:
Vision is one of the primary sensory modalities of animals and robots, yet among robots it still has limited power in natural environments. Dynamic processes of Nature continuously change how an environment looks, which work against appearance-based methods for visual data association. As a robot is deployed again and again, the possibility of finding correspondences diminishes between surveys increasingly separated in time. This is a major limitation of intelligent systems targeted for precision agriculture, search and rescue, and environment monitoring. We sought a new approach to visual data association to overcome the variation in appearance of a natural environment, as it was experienced by a field robot over several years.
We found success with a map-centric approach, which builds on 3D vision to achieve visual data association across seasons. We first created the Symphony Lake Dataset, which consists of fortnightly visual surveys of a 1.3 km lakeshore captured from an autonomous surface vehicle over three years. We then established dense correspondence as a technique to both provide robust visual data association and eliminate the variation in viewpoint between surveys. Given a consistent map and localized poses, we next found that we could achieve visual data association across seasons by integrating map point priors and geometric constraints into the dense correspondence image alignment optimization. We called this algorithm "Reprojection Flow".
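The geometric core of the idea can be sketched in a few lines: given a shared map and localized poses, each 3D map point can be projected into both surveys, and the displacement between its two reprojections is a flow prior that anchors the dense alignment regardless of how the appearance has changed. The sketch below is my own minimal illustration using a simple pinhole model; function names are illustrative, and the full algorithm integrates such priors into a dense correspondence optimization rather than stopping at sparse vectors.

```python
import numpy as np

def project(points_w, R, t, K):
    """Project Nx3 world points into an image with a pinhole camera.
    R, t: world-to-camera rotation and translation; K: 3x3 intrinsics."""
    pts_c = R @ points_w.T + t[:, None]   # 3xN points in the camera frame
    uv = K @ pts_c                        # homogeneous pixel coordinates
    return (uv[:2] / uv[2]).T             # Nx2 pixel coordinates

def reprojection_flow_priors(points_w, pose_a, pose_b, K):
    """Sparse flow priors between two localized surveys of the same map.

    pose_a, pose_b: (R, t) world-to-camera poses for survey A and survey B.
    Returns the anchor pixels in survey A and, for each, the flow vector to
    where the same map point lands in survey B. These geometric anchors hold
    even when seasonal appearance change defeats photometric matching."""
    uv_a = project(points_w, *pose_a, K)
    uv_b = project(points_w, *pose_b, K)
    return uv_a, uv_b - uv_a
```

For example, a map point straight ahead of camera A, viewed by camera B translated one unit to the left in camera coordinates, yields a purely horizontal flow prior at that pixel.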
We presented the first work to see through the variation in appearance across seasons in a natural environment using map point priors and localized poses. Our algorithm for map-anchored dense correspondence showed a substantial gain in visual data association despite the difficult variation in appearance. Up to 37 surveys were transformed into year-long time-lapses at the scenes where their maps were consistent. This indicates that, at a time when frequent advancements toward robust visual data association are made using deep learning, the spatial information in a map may be able to close the gap in hard cases that have persisted between observations.
(2013) Policy Shaping: Integrating Human Feedback with Reinforcement Learning. [slides] [code] [appendix]
Shane Griffith, Kaushik Subramanian, Jon Scholz, Charles Isbell, and Andrea Thomaz.
Advances in Neural Information Processing Systems (NeurIPS), 2625-2633, Lake Tahoe, Nevada.
I passed the Qual after investigating how robots might learn and explore without being labeled defective. Although a large body of work already addresses how a robot might explore its environment in order to learn and adapt to it, the risks of exploration are commonly overlooked. Few papers addressed how a robot could reliably stay out of harm's way if it is left to its own devices. After I saw this problem, my research goal was to investigate how robots could avoid committing serious errors during their exploratory learning phase.
A step toward error-free learning is made possible with a new insight about how to learn from human feedback. Feedback interpreted as a direct label on the correctness of an action can provide a way to eliminate hazardous sections of the state space. This is in contrast to most previous work, in which feedback is interpreted as a reward (e.g., reward shaping), which creates something like a trail of breadcrumbs for coaxing an agent out of an undesirable or dangerous area. Our new "policy shaping" approach to interactive machine learning called for a fundamentally new way to use feedback with reinforcement learning.
We ended up deriving a simple, yet rigorous, information-theoretic algorithm to maximize the information gained from human feedback, which we named Advise. Our experiments showed that Advise in some cases significantly outperformed state-of-the-art methods and was robust to noise. It also eliminated the ad hoc parameter tuning common to methods that interpret feedback as a reward. These advancements were presented at the 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM), where it was one of the top four papers, and published in the 27th Annual Conference on Neural Information Processing Systems (NeurIPS).
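The heart of Advise is small enough to sketch: feedback labels are treated as noisy evidence that an action is optimal, with a consistency parameter C giving the probability any single label is correct, and the resulting feedback policy is multiplied into the agent's own estimate. The sketch below is my own minimal reconstruction for illustration; variable names are mine, and the full method pairs this rule with Bayesian Q-learning.

```python
import numpy as np

def advise_policy(q_probs, deltas, C=0.8):
    """Combine an RL policy with human feedback, Advise-style.

    q_probs: per-action probability of being optimal, from the RL agent.
    deltas:  per-action net feedback, (# 'right' labels) - (# 'wrong' labels).
    C:       assumed feedback consistency, the chance a single label is correct.
    """
    deltas = np.asarray(deltas, dtype=float)
    # Probability each action is optimal according to the feedback alone:
    # many consistent 'right' labels push this toward 1, 'wrong' toward 0.
    fb = C ** deltas / (C ** deltas + (1 - C) ** deltas)
    # Multiply the two independent estimates and renormalize into a policy.
    combined = np.asarray(q_probs) * fb
    return combined / combined.sum()
```

With no feedback (all deltas zero) the feedback term is uniform and the agent's own policy is returned unchanged; a few consistent labels quickly dominate the combined policy, which is what lets feedback carve hazardous actions out of consideration.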
(2012) A Behavior-Grounded Approach to Forming Object Categories: Separating Containers from Non-Containers. [slides] [videos]
Shane Griffith, Jivko Sinapov, Vlad Sukhoy, and Alex Stoytchev.
IEEE Transactions on Autonomous Mental Development (TAMD), 4:1, 54-69.
I earned my M.S. after 3.5 years of studying what a container is and how a humanoid robot can learn what a container is. Although a growing body of literature in robotics addressed many different container manipulation problems, individual papers only chipped away at isolated problems one by one. This meant that the algorithms for one domain were not directly applicable to other domains. After I saw this problem, the goal of my thesis was to identify how a robot could start to learn about containers in a more general way.
Because people have a representation of containers that generalizes across many different container manipulation problems, I looked to psychology for insight into the origins of container learning. Psychologists observed that infants form an abstract spatial category for containers, which allows them to apply their knowledge to novel containers. At the time, however, theories of object categorization weren't clear about exactly how infants form an object category for containers. Consequently, I looked more deeply into the psychology literature in order to try to understand how infants learn.
By synthesizing many different theories and observations from psychology, I extrapolated an explanation for how infants learn object categories. With the expertise of the whole team, we were able to create a computational framework for a robot to learn object categories in a similar way. Our experiments with containers showed that this method of object categorization works, and works well.
Our work was well received when we submitted it to the IEEE Transactions on Autonomous Mental Development (TAMD). An eminent developmental psychologist reviewed the object categorization theory (the other two reviewers' expertise was robotics), and in the "comments to the author" that we received when the paper was accepted, she signed her name to her review (reviews are usually anonymous) and said:
"I commend the authors on a fantastic literature review of my domain. The authors accurately cite a broad array of the relevant literature. There were no relevant articles missing. I do not have any suggested changes because I think the literature is very good as it is. ...I was tickled by the unification of citations from people that are often perceived to be in opposing theoretical camps. ...I signed this review because I hope that the authors send me a copy when they get it published. I find the work fascinating and I would like to refer to their [work] in my own work."
In addition to technical comments that helped us to improve our work, the two roboticists said "[this paper presents] an interesting and out-of-the-box way of addressing concept acquisition" and "this paper makes a significant contribution to the existing literature." In the end, my research productivity for my M.S. came to rest at π (11 papers in 3.5 years).
FULL PUBLICATION LIST
Dissertation and Thesis
Journals/Conferences/Workshops
(2019) Transforming Multiple Visual Surveys of a Natural Environment Into Time-Lapses. [video]
Team: Shane Griffith, Frank Dellaert, and Cedric Pradalier.
International Journal of Robotics Research (IJRR).
(2017) Symphony Lake Dataset. [files]
Team: Shane Griffith, Georges Chahine, and Cedric Pradalier.
International Journal of Robotics Research (IJRR), 36, 1151-1158.
(2016) Reprojection Flow for Image Registration Across Seasons.
Team: Shane Griffith and Cedric Pradalier.
British Machine Vision Conference (BMVC), York, UK.
(2016) Survey Registration for Long-Term Natural Environment Monitoring.
Team: Shane Griffith and Cedric Pradalier.
Journal of Field Robotics.
(2015) A Spatially and Temporally Scalable Approach for Long-Term Lakeshore Monitoring.
Team: Shane Griffith and Cedric Pradalier.
Field and Service Robotics (FSR), Toronto, Canada.
(2015) Robot-Enabled Lakeshore Monitoring Using Visual SLAM and SIFT Flow.
Team: Shane Griffith, Frank Dellaert, and Cedric Pradalier.
RSS Workshop on Multi-View Geometry in Robotics, Rome, Italy.
(2014) Towards Autonomous Lakeshore Monitoring.
Team: Shane Griffith, Paul Drews, and Cedric Pradalier.
International Symposium on Experimental Robotics (ISER), Marrakech, Morocco.
(2013) Policy Shaping: Integrating Human Feedback with Reinforcement Learning. [appendix] [code]
Team: Shane Griffith, Kaushik Subramanian, Jon Scholz, Charles Isbell, and Andrea Thomaz.
Advances in Neural Information Processing Systems (NeurIPS), 2625-2633, Lake Tahoe, Nevada.
(2012) Object Categorization in the Sink: Learning Behavior-Grounded Object Categories With Water.
Team: Shane Griffith, Vlad Sukhoy, Todd Wegter, and Alex Stoytchev.
ICRA Workshop on SPME, St. Paul, Minnesota.
(2012) A Behavior-Grounded Approach to Forming Object Categories: Separating Containers from Non-Containers.
Team: Shane Griffith, Jivko Sinapov, Vlad Sukhoy, and Alex Stoytchev.
IEEE Transactions on Autonomous Mental Development (TAMD), 4:1, 54-69.
(2011) Using Sequences of Movement Dependency Graphs to Form Object Categories.
Team: Shane Griffith, Vlad Sukhoy, and Alex Stoytchev.
Humanoids, 715-720, Bled, Slovenia.
(2011) Toward Imitating Object Manipulation Tasks Using Sequences of Movement Dependency Graphs.
Team: Vlad Sukhoy, Shane Griffith, and Alex Stoytchev.
RSS Workshop on The State of Imitation Learning, Los Angeles, California.
(2011) Interactive Object Recognition Using Proprioceptive and Auditory Feedback.
Team: Jivko Sinapov, Taylor Bergquist, Connor Schenck, Ugonna Ohiri, Shane Griffith, and Alex Stoytchev.
International Journal of Robotics Research (IJRR), 30:1, 1250-1262.
(2010) Interactive Categorization of Containers and Non-Containers by Unifying Categorizations Derived From Multiple Exploratory Behaviors.
Team: Shane Griffith and Alex Stoytchev.
Association for the Advancement of Artificial Intelligence (AAAI), Atlanta, Georgia.
(2010) How to Separate Containers from Non-Containers? A Behavior-Grounded Approach to Acoustic Object Categorization.
Team: Shane Griffith, Jivko Sinapov, Vlad Sukhoy, and Alex Stoytchev.
IEEE International Conference on Robotics and Automation (ICRA), 1852-1859, Anchorage, Alaska.
(2009) Interactive Object Recognition Using Proprioceptive Feedback.
Team: Taylor Bergquist, Connor Schenck, Ugonna Ohiri, Jivko Sinapov, Shane Griffith, and Alex Stoytchev.
IROS Workshop on Semantic Perception for Mobile Manipulation, St. Louis, Missouri.
(2009) Interactive Identification of Writing Instruments and Writable Surfaces by a Robot.
Team: Ritika Sahai, Shane Griffith, and Alex Stoytchev.
RSS Workshop on Mobile Manipulation in Human Environments, Seattle, Washington.
(2009) Toward Interactive Learning of Object Categories by a Robot: A Case Study with Container and Non-Container Objects.
Team: Shane Griffith, Jivko Sinapov, Matt Miller, and Alex Stoytchev.
8th IEEE International Conference on Development and Learning (ICDL), Shanghai, China.
Accepted Abstracts/Presentations
(2016) Towards Reprojection Flow for Image Registration Across Seasons.
Team: Shane Griffith and Cedric Pradalier.
ICRA Workshop on AI for Long-Term Autonomy, Stockholm, Sweden.
(2013) Policy Shaping: Integrating Human Feedback with Reinforcement Learning.
Team: Shane Griffith, Kaushik Subramanian, Jon Scholz, Charles Isbell, and Andrea Thomaz.
1st Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM), Princeton, New Jersey.
(2009) Toward Learning to Write by Identifying Writable Surfaces.
Team: Ritika Sahai, Shane Griffith, and Alex Stoytchev.
8th IEEE International Conference on Development and Learning (ICDL), Poster Abstract, Shanghai, China.
(2009) Learning to Detect Containers with Human Assistance.
Team: Shane Griffith.
HRI Pioneers Workshop, Workshop Abstract, San Diego, California.
(2008) Toward Learning to Detect and Use Containers.
Team: Shane Griffith, Jivko Sinapov, and Alex Stoytchev.
7th IEEE International Conference on Development and Learning (ICDL), Poster Abstract, Monterey, California.
(2008) Holonomic Architecture for Networked Cooperative Robots.
Team: Alex Baumgarten, John Dashner, Shane Griffith, Kyle Miller, Mark Rabe, Chris Tott, Jon Watson, Joshua Watt, and Nicola Elia.
IEEE International Conference on Electro/Information Technology (EIT), Undergraduate Student Paper Competition, Ames, Iowa.
(2009, 2010, 2011) ISU ETC/WINVR
(2007, 2008) ISU Undergraduate Research Symposium