NEMS 2015 @ Northeastern

New England Manipulation Symposium



Schedule


9:00 -- 9:15 Registration

9:15 -- 9:20 Welcome and Introduction -- Robert Platt (108 West Village H)

9:20 -- 10:20 Session 1: Grasp Perception and Synthesis (108 West Village H)
9:20 -- 9:40 Generating Multi-Fingered Robotic Grasps via Deep Learning, Jake Varley, Jared Weiss, Jon Weisz, and Peter Allen, Columbia University
9:40 -- 10:00 Synthesis and Optimization of Force Closure Grasps via Sequential Semidefinite Programming, Hongkai Dai, Anirudha Majumdar, and Russ Tedrake, Massachusetts Institute of Technology
10:00 -- 10:20 Using Geometry to Detect Grasps in 3D Point Clouds, Rob Platt and Andreas ten Pas, Northeastern University
10:20 -- 10:40 Contact Sensing Based Corrective Actions in Data-Driven Grasping, Qian Wan and Robert D. Howe, Harvard University

10:40 -- 11:20 Coffee, Posters, Demos, and Lab Tours (108, 208 West Village H)

11:20 -- 12:20 Session 2: Multirobot Systems (108 West Village H)
11:20 -- 11:40 Collaborative Manipulation of Flexible Materials, Daniel Kruse, Richard J. Radke, and John Wen, Rensselaer Polytechnic Institute
11:40 -- 12:00 Grasp Planning for Multi-Robot Assembly, Mehmet Dogar, Andrew Spielberg, Stuart Baker, and Daniela Rus, Massachusetts Institute of Technology
12:00 -- 12:20 Autonomous Door Opening and Traversal, Benjamin Axelrod and Wesley H. Huang, iRobot Corporation

12:20 -- 1:20 Lunch (336 West Village H)

1:20 -- 2:40 Session 3: State Estimation in Manipulation (108 West Village H)
1:20 -- 1:40 Identifiability of Rigid Bodies Undergoing Frictional Contact, Nima Fazeli, Russ Tedrake, and Alberto Rodriguez, Massachusetts Institute of Technology
1:40 -- 2:00 Object Sensing and Classification with Tactile Sensing and Underactuated Hands, Adam Spiers, Yale University
2:00 -- 2:20 State Estimation for Dynamic Systems with Intermittent Contact, Shuai Li, Rensselaer Polytechnic Institute
2:20 -- 2:40 Axiomatic Scene Estimation for Robotic Manipulation, Zhiqiang Sui, Odest Chadwicke Jenkins, and Karthik Desingh, Brown University

2:40 -- 3:40 Coffee, Posters, Demos, and Lab Tours (108, 208 West Village H)

3:40 -- 4:40 Session 4: Manipulation (108 West Village H)
3:40 -- 4:00 Careful Control of Dynamic Objects, CJ Hasson, Northeastern University
4:00 -- 4:20 Robots that Change the World: Hard Instances of Object Rearrangement, Athanasios Krontiris and Kostas Bekris, Rutgers University
4:20 -- 4:40 Prehensile Pushing: In-hand Manipulation with Push-Primitives, Nikhil Chavan-Dafle and Alberto Rodriguez, Massachusetts Institute of Technology

4:40 -- 4:45 Closing Remarks -- Robert Platt (108 West Village H)

Presentations

Generating Multi-Fingered Robotic Grasps via Deep Learning, Jake Varley, Jared Weiss, Jon Weisz, and Peter Allen, Columbia University
This talk presents a deep learning architecture for detecting the palm and fingertip positions of stable grasps directly from partial object views. The architecture is trained using RGBD image patches of fingertip and palm positions from grasps computed on complete object models using a grasping simulator. At runtime, the architecture can estimate grasp quality metrics such as force closure without explicitly calculating the given metric. This ability is useful because the exact calculation of these quality functions is impossible from an incomplete view of a novel object without any tactile feedback. This architecture for grasp quality prediction provides a framework for generalizing grasp experience from known to novel objects.
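
As a rough illustration of the idea (not the authors' architecture), the sketch below maps a single RGBD patch to a scalar grasp quality score with a small convolutional network in PyTorch; the 32x32 patch size, layer widths, and single-output head are illustrative assumptions.

    # A small patch-to-quality network; input size, layer widths, and the
    # scalar output head are assumptions, not the talk's architecture.
    import torch
    import torch.nn as nn

    class GraspQualityNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(4, 16, kernel_size=5, padding=2),  # 4 channels: RGB + depth
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 32x32 -> 16x16
                nn.Conv2d(16, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 16x16 -> 8x8
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 8 * 8, 64),
                nn.ReLU(),
                nn.Linear(64, 1),  # predicted grasp quality score
            )

        def forward(self, patch):
            return self.head(self.features(patch))

    net = GraspQualityNet()
    quality = net(torch.randn(1, 4, 32, 32))  # one RGBD patch -> scalar estimate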

Prehensile Pushing: In-hand Manipulation with Push-Primitives, Nikhil Chavan-Dafle and Alberto Rodriguez, Massachusetts Institute of Technology
We explored the manipulation of a grasped object by pushing it against its environment. By exploiting controlled slip, a prehensile push forces a grasped object to move between stable grasps. Relying only on precise arm motions and detailed models of frictional contact, prehensile pushing enables dexterous manipulation with simple manipulators, such as those currently available in industrial settings, and those likely affordable by service robots. In this presentation we focus on explaining the mechanics of the interaction between the gripper, the object, and the environment. In particular, we consider a quasi-dynamic motion of an object held by a set of point, line, or planar rigid frictional contacts and pushed by an external pusher (the environment). Our model predicts the force required by the external pusher to "break" the equilibrium of the grasp and estimates the instantaneous motion of the object. It also captures interesting behaviors such as the constraining effect of line or planar contacts and the guiding effect of the pusher motion on the motion of the object. We evaluate the algorithm with the analysis of three primitive prehensile pushing actions--straight sliding, pivoting, and rolling--which can be thought of as building blocks of a broader in-hand manipulation capability.
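
One way to see the structure of the equilibrium test: for rigid point contacts with Coulomb friction, asking whether the pusher "breaks" the grasp is a linear feasibility problem over the contact forces. A minimal planar sketch with made-up geometry, friction coefficient, and pusher wrench follows; the talk's model additionally handles line/planar contacts and quasi-dynamic motion.

    # Can friction-cone contact forces balance the pusher wrench? If the LP is
    # feasible, the grasp holds; all numbers below are illustrative.
    import numpy as np
    from scipy.optimize import linprog

    mu = 0.5                                    # Coulomb friction coefficient (assumed)
    contacts = [                                # (position, inward normal, tangent)
        (np.array([-0.02, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])),
        (np.array([0.02, 0.0]), np.array([-1.0, 0.0]), np.array([0.0, 1.0])),
    ]

    # Grasp map G: columns map contact forces (fn_i, ft_i) to the wrench (fx, fy, tau).
    cols = []
    for p, n, t in contacts:
        for d in (n, t):
            cols.append([d[0], d[1], p[0] * d[1] - p[1] * d[0]])
    G = np.array(cols).T

    w_push = np.array([0.0, 1.0, 0.01])         # wrench applied by the external pusher

    # Equilibrium holds iff G f = -w_push for some f with fn_i >= 0 and
    # |ft_i| <= mu * fn_i, encoded as linear inequalities A_ub f <= 0.
    A_ub = []
    for i in range(len(contacts)):
        for cn, ct in ((-1.0, 0.0), (-mu, 1.0), (-mu, -1.0)):
            row = np.zeros(G.shape[1])
            row[2 * i], row[2 * i + 1] = cn, ct
            A_ub.append(row)

    res = linprog(np.zeros(G.shape[1]), A_ub=np.array(A_ub), b_ub=np.zeros(len(A_ub)),
                  A_eq=G, b_eq=-w_push, bounds=[(None, None)] * G.shape[1])
    print("grasp holds" if res.success else "pusher breaks the grasp")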

Synthesis and Optimization of Force Closure Grasps via Sequential Semidefinite Programming, Hongkai Dai, Anirudha Majumdar, and Russ Tedrake, Massachusetts Institute of Technology
In this talk we present a novel approach for synthesizing and optimizing both positions and forces in force closure grasps. This problem is a non-convex optimization problem in general since it involves constraints that are bilinear; in particular, computing wrenches involves a bilinear product between grasp contact points and contact forces. Thus, conventional approaches to this problem typically employ general purpose gradient-based nonlinear optimization. The key observation of this paper is that the force closure grasp synthesis problem can be posed as a Bilinear Matrix Inequality (BMI), for which there exist efficient solution techniques based on semidefinite programming. We show that we can synthesize force closure grasps on different geometric objects, and by maximizing a lower bound of a grasp metric, we can improve the quality of the grasp. While this approach does not guarantee that a solution will always be found, it has a few distinct advantages. First, we can handle non-smooth constraints, which can often be important. Second, in contrast to gradient-based approaches we can prove infeasibility of problems. We demonstrate our method on a 15 joint robot model grasping objects with various geometries.
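
The bilinearity is easy to see in the wrench computation itself: the torque component multiplies the unknown contact point by the unknown contact force. A toy numpy illustration with arbitrary values:

    # The object wrench couples contact point p and contact force f through a
    # cross product, so constraints on the wrench are bilinear in (p, f).
    import numpy as np

    p = np.array([0.03, 0.00, 0.05])   # contact point (decision variable)
    f = np.array([0.0, -1.0, 0.2])     # contact force (decision variable)

    wrench = np.concatenate([f, np.cross(p, f)])   # torque term p x f is bilinear
    print(wrench)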

Autonomous Door Opening and Traversal, Benjamin Axelrod and Wesley H. Huang, iRobot Corporation
In order to access many spaces in human environments, mobile robots need to be adept at using doors: opening the door, traversing (i.e., passing through) the doorway, and possibly closing the door afterwards. The challenges in these problems vary with the type of door (push-/pull-doors, self-closing mechanisms, etc.) and type of door handle (knob, lever, crashbar, etc.). In addition, the capabilities and limitations of the robot can have a strong effect on the techniques and strategies needed for these tasks. We have developed a system that autonomously opens and traverses push- and pull-doors, with or without self-closing mechanisms, with knobs or levers, using an iRobot 510 PackBot® (a nonholonomic mobile base with a 5 degree-of-freedom arm) and a custom gripper with a passive 2 degree-of-freedom wrist. To the best of our knowledge, our system is the first to demonstrate autonomous door opening and traversal on the most challenging combination of a pull-door with a self-closing mechanism. In this paper, we describe the operation of our system and the results of our experimental testing.

Identifiability of Rigid Bodies Undergoing Frictional Contact, Nima Fazeli, Russ Tedrake, and Alberto Rodriguez, Massachusetts Institute of Technology
We are interested in addressing the identifiability of the inertial parameters and the contact forces associated with the observation of an object making and breaking frictional contact with the environment. We assume that both object and environment are rigid, and exploit the structure of the complementarity formulation for contact resolution to establish a closed-form relationship between inertial parameters, contact forces, and observed motions. We study the cases of passive interaction, where the object collides with the environment under a gravitational field, and active interaction, where external but known perturbations act on the object at the moment of contact. Our identifiability analysis indicates that without the application of known external forces, the identifiable set of parameters remains coupled, i.e., the ratio of mass moment of inertia to mass and the ratio of contact forces to mass. The analysis also shows that with known external forces the mass and mass moment of inertia can be decoupled, leading to identifiability of mass, mass moment of inertia, and normal and tangential contact forces. We evaluate the identifiability formulation with three examples: a 2D bar, a 2D block, and a 2D ellipse.
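
The coupling result has a compact intuition: observed motion depends on contact forces only through force-to-mass ratios. A toy numpy check on a planar point mass (arbitrary numbers, far simpler than the paper's rigid-body complementarity formulation):

    # From observed motion alone, only the ratio of contact force to mass is
    # identifiable; a known external force breaks the scale ambiguity.
    import numpy as np

    g = np.array([0.0, -9.81])
    m, f_contact = 2.0, np.array([0.0, 30.0])   # "true" mass and contact force

    a_observed = g + f_contact / m              # Newton: a = g + f/m

    # Any scaled pair (alpha*m, alpha*f_contact) reproduces the same motion:
    for alpha in (0.5, 1.0, 3.0):
        assert np.allclose(g + (alpha * f_contact) / (alpha * m), a_observed)

    # With a known external force F, a = g + (f + F)/m is no longer invariant
    # under (m, f) -> (alpha*m, alpha*f), so mass becomes identifiable.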

Object Sensing and Classification with Tactile Sensing and Underactuated Hands, Adam Spiers, Yale University
Underactuated robot hands provide the ability to reliably grasp a wide variety of objects with model-free, open-loop control methods via mechanical adaptability. This makes such robust and low-cost grippers applicable to a wide variety of real-world manipulation tasks. Aside from grasping, another use of hands in biological systems is the use of haptic feedback to determine object properties, such as size and shape. Though a variety of groups have proposed integrating tactile sensors into robotic hands for this purpose, the approaches often involve dense sensor arrays or lengthy exploratory motions. In this work we apply machine learning and parametric modeling methods to classify and measure household objects during a single open-loop grasp performed by an underactuated hand equipped with 16 TakkTile pressure sensors. The acquisition of haptic data and subsequent processing does not influence motor control and so may be carried out during standard open-loop, adaptive grasps. The method is robust to perturbations in object position and orientation within the grasp, permitting high classification accuracy and object dimension estimation.
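
For a sense of scale, the classification step reduces to learning a map from one 16-dimensional pressure vector per grasp to an object label. A minimal sketch with synthetic stand-in data and a random forest; the authors' actual features, dataset, and learners may differ.

    # Stand-in pipeline: 16 pressure readings per grasp -> object class.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_grasps, n_sensors = 300, 16
    X = rng.normal(size=(n_grasps, n_sensors))          # stand-in pressure features
    # Stand-in labels: pretend class depends on a coarse pressure pattern.
    y = np.digitize(X[:, :4].sum(axis=1), bins=[-2, -0.5, 0.5, 2])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))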

Axiomatic Scene Estimation for Robotic Manipulation, Zhiqiang Sui, Odest Chadwicke Jenkins, and Karthik Desingh, Brown University
The main challenge for robots performing complex manipulation tasks for humans is that many aspects of the human's world model are difficult or impossible for the robot to sense directly. We posit that the critical missing component is the grounding of symbols that conceptually tie together low-level perception and high-level reasoning for extended goal-directed autonomy. In this paper, we propose an approach to axiomatic state estimation, the Axiomatic Particle Filter (APF), to perform anchoring and manipulation in cluttered scenes containing objects in physical contact. We cast axiomatic state estimation as perceiving the robot's environment as a scene graph, where each object in the scene is a node and inter-object relations are edges. The Axiomatic Particle Filter is then presented as an implementation that estimates the poses and spatial relations of simple objects (blocks) in a scene from 3D point clouds, assuming known object geometries. We present results from experiments with axiomatic particle filtering within a sequential manipulation task on controlled scenes involving objects that are stacked and partially occluded.

Collaborative Manipulation of Flexible Materials, Daniel Kruse, Richard J. Radke, and John Wen, Rensselaer Polytechnic Institute
Robotic manipulation of highly deformable materials is inherently challenging due to the need to maintain tension and the high dimensionality of the state of the material. Past work in this area mostly focuses on generating a detailed model for the material and its interaction with the robot, then using the model to construct a motion plan. In this paper, we take a different approach by using only sensor feedback to dictate the robot motion. We consider the collaborative manipulation of a deformable sheet between a person and a dual-armed robot (Baxter by Rethink Robotics). The robot is capable of contact sensing via joint torque sensors and is equipped with a head-mounted RGB-D sensor. The robot senses contact force to maintain tension in the sheet and, in turn, comply with the human's motion. This is akin to handling a tablecloth with a partner but with one's eyes closed. To improve the response, we use the RGB-D sensor to detect folds and command the robot to move in an orthogonal direction to smooth them out. This is like handling cloth by looking at the cloth itself. Both controllers are able to follow human motion without excessive crimps in the sheet, but, as expected, the hybrid controller combining force and vision outperforms the force-only controller in terms of the tension force transient. The ability to quickly detect the state of the deformable material also enables more complex manipulation strategies in the future.
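
The hybrid controller can be pictured as the sum of two velocity terms, one from force feedback and one from vision. A minimal sketch with stubbed-out sensors; the gains, frames, and control structure are assumptions, not the paper's implementation.

    # Hybrid force/vision control sketch: regulate sheet tension with force
    # feedback, move orthogonal to detected folds to smooth them out.
    import numpy as np

    K_f, K_v = 0.002, 0.05          # force and vision gains (assumed)
    tension_ref = 5.0               # desired sheet tension [N] (assumed)

    def read_tension():             # stub: joint-torque-based force estimate
        return 4.0

    def fold_direction():           # stub: unit vector orthogonal to a fold
        return np.array([0.0, 1.0, 0.0])

    def control_step():
        # Force feedback: pull along the sheet to hold tension (assume +x is
        # "away from the human" in the end-effector frame).
        v_force = K_f * (tension_ref - read_tension()) * np.array([1.0, 0.0, 0.0])
        # Vision feedback: move orthogonal to the fold to smooth it out.
        v_vision = K_v * fold_direction()
        return v_force + v_vision   # commanded end-effector velocity

    print(control_step())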

State Estimation for Dynamic Systems with Intermittent Contact, Shuai Li, Rensselaer Polytechnic Institute
Dynamic system state estimation, such as estimating object pose and contact states, is essential for robots performing manipulation tasks. To make accurate estimates, the state transition model must be physically correct. Complementarity formulations of the dynamics are widely used to describe rigid-body behavior in the simulation field, which makes them good state transition models for dynamic state estimation. However, the non-smoothness of complementarity models and the high dimensionality of the dynamic system make the estimation problem challenging. In this work, we propose a particle filtering framework that samples the discrete contact states using contact graphs and collision detection algorithms, and estimates the continuous states with a Kalman filter. This method exploits the piecewise-continuous structure of complementarity problems and reduces the dimension of the sampling space compared with sampling the high-dimensional continuous state space directly. We demonstrate that this method produces stable and reliable estimates in physical experiments.
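
A minimal sketch of the sample-discrete/filter-continuous structure, on a 1D toy system with two contact modes; the models, noise levels, and mode-transition probability are stand-ins (the paper derives modes from contact graphs and collision detection):

    # Each particle carries a discrete contact mode plus a Kalman filter over
    # the continuous state, conditioned on that mode. Toy 1D falling object.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, n_particles = 0.01, 100

    # Mode-conditioned linear models: ballistic "free" fall vs. resting "contact".
    A = {"free": np.array([[1, dt], [0, 1]]), "contact": np.array([[1, 0], [0, 0]])}
    b = {"free": np.array([0, -9.81 * dt]), "contact": np.zeros(2)}
    H = np.array([[1.0, 0.0]])                      # we only measure position
    Q, R = 1e-4 * np.eye(2), np.array([[1e-3]])

    particles = [{"mode": "free", "x": np.array([1.0, 0.0]),
                  "P": 0.01 * np.eye(2), "w": 1.0} for _ in range(n_particles)]

    def step(particles, z):
        for p in particles:
            # Sample the discrete mode (0.05 is a stand-in transition probability).
            if rng.random() < 0.05:
                p["mode"] = "contact" if p["mode"] == "free" else "free"
            A_m, b_m = A[p["mode"]], b[p["mode"]]
            # Kalman predict/update over the continuous state.
            x, P = A_m @ p["x"] + b_m, A_m @ p["P"] @ A_m.T + Q
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            innov = z - H @ x
            p["x"], p["P"] = x + K @ innov, (np.eye(2) - K @ H) @ P
            p["w"] *= float(np.exp(-0.5 * innov.T @ np.linalg.inv(S) @ innov))
        total = sum(p["w"] for p in particles)
        for p in particles:
            p["w"] /= total
        return particles

    particles = step(particles, z=np.array([0.99]))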

Robots that Change the World: Hard Instances of Object Rearrangement, Athanasios Krontiris and Kostas Bekris, Rutgers University
An important skill for robot manipulators is the effective rearrangement of multiple objects so as to deal with clutter in human spaces. Rearrangement is a challenging problem as it involves combinatorially large, continuous C-spaces for multiple movable bodies with challenging kinematic constraints. This talk will describe simple but general primitives for rearranging multiple objects and their use within task planning frameworks. A previously proposed search-based primitive can quickly compute solutions for monotone challenges, i.e., when objects need to be grasped only once. This method is extended to also work on a subset of non-monotone cases. Both of these primitives are used as local planners within a higher-level task planner that searches the space of object placements and for which completeness guarantees can be provided. Experimental results using a model of a Baxter robot arm indicate the benefit of using more powerful motion primitives in the context of task planning for object rearrangement.
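
As a rough illustration of the monotone primitive (not the authors' planner), the sketch below moves each disc-shaped object straight to its goal when the path is clear and recurses on the ordering; the geometry and clearance test are toy assumptions.

    # Monotone rearrangement sketch: each object is grasped once and moved
    # straight to its goal; search over which object to move next.
    import numpy as np

    RADIUS = 1.0

    def path_clear(start, goal, obstacles):
        # Clear if every other object stays > 2*RADIUS from the motion segment.
        d = goal - start
        for q in obstacles:
            t = np.clip(np.dot(q - start, d) / max(np.dot(d, d), 1e-9), 0.0, 1.0)
            if np.linalg.norm(q - (start + t * d)) < 2 * RADIUS:
                return False
        return True

    def monotone_plan(current, goals):
        todo = [i for i in current if not np.allclose(current[i], goals[i])]
        if not todo:
            return []
        for i in todo:
            others = [current[j] for j in current if j != i]
            if path_clear(current[i], goals[i], others):
                tail = monotone_plan({**current, i: goals[i]}, goals)
                if tail is not None:
                    return [i] + tail
        return None  # no monotone solution from this state

    current = {"a": np.array([0.0, 0.0]), "b": np.array([4.0, 0.0])}
    goals = {"a": np.array([8.0, 0.0]), "b": np.array([4.0, 4.0])}
    print(monotone_plan(current, goals))   # 'b' must move out of 'a's way first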

Careful Control of Dynamic Objects, CJ Hasson, Northeastern University
TBD.

Using Geometry to Detect Grasps in 3D Point Clouds, Rob Platt and Andreas ten Pas, Northeastern University
TBD.

Grasp Planning for Multi-Robot Assembly, Mehmet Dogar, Andrew Spielberg, Stuart Baker, and Daniela Rus, Massachusetts Institute of Technology
The next generation of manufacturing systems will include multi-robot teams working together to assemble, transport, and inspect complex objects. These robots will use efficient planning algorithms which scale nicely with problems of increasing size and complexity. In this talk I will present our research on planning multi-robot collaborative assembly operations. I will formulate the question as a constraint satisfaction problem, show that naive solutions are computationally intractable for complex assembly structures, and present a planning algorithm which uses "re-grasping" actions to divide the constraint graph into smaller components. The algorithm enables robots to build structures which are otherwise infeasible to plan for.

Posters

Human-Robot Interface for Formal Safety Controller Synthesis for Multi-Link Robots with Elastic Joints, Sayan Saha, Bilal Salam, Andrew K. Winn, and A. Agung Julius, Rensselaer Polytechnic Institute
With increasing interest in sharing workspaces between humans and robots, robots with elastic joints are playing a pivotal role in making human-robot interaction safer. Such robot arms are also of interest because, even though industrial robots are usually modeled as rigid bodies, their joints are in fact somewhat flexible due to the presence of transmission elements. Furthermore, robots with flexible links can be approximated by multi-link robots with elastic joints, making controller design for the former comparatively easy. In this work, we discuss the problem of synthesizing a provably correct controller for motion control of such robots in the presence of obstacles in the workspace, exploiting the fact that the dynamics of such robots are feedback linearizable. For robots with many links, this task is difficult because the configuration space is high-dimensional: for a robot with N links, the configuration space has dimension 2N and the state space has dimension 4N. Based on our previous results on trajectory-based formal controller synthesis for nonlinear systems, we demonstrate that a provably correct controller can be obtained from finitely many samples of valid execution trajectories provided by a human through an interface based on the Leap Motion Controller. The motivation for this work is to develop a human-robot interface that allows an operator to manipulate a robot arm in its task space without being physically present near the robot. Using the interface, the operator provides set-points for the desired task by making specific hand gestures that the Leap Motion Controller recognizes. From these inputs, we formulate a controller that ensures the robot arm correctly executes the manipulation task by following the trajectory generated from the operator-given set-points.

Design and Control of a Robotic System for Microsurgery, Alperen Degirmenci, Conor J. Walsh, Frank L. Hammond III, Robert J. Wood, Joshua B. Gafford, and Robert D. Howe, Harvard University and Massachusetts Institute of Technology
In this presentation, we will talk about the design and control of a teleoperated robotic system for dexterous micromanipulation tasks at the meso-scale, specifically open microsurgery. Robotic open microsurgery is an unexplored yet potentially high-impact area of surgical robotics. Microsurgical operations, such as microanastomosis of blood vessels and reattachment of nerve fibers, require levels of manual dexterity and accuracy that surpass human capabilities. We have designed and built a 3 degree-of-freedom (DoF) robotic wrist based on a spherical five-bar mechanism. The wrist is attached to a 3-axis commercial off-the-shelf linear stage, achieving a fully dexterous system. Design requirements were determined using motion data collected during a simulated microanastomosis operation. We optimized the wrist design to maximize its workspace and manipulability. The system is teleoperated using a haptic device, and has the required bandwidth to replicate microsurgical motions. We built graspers with integrated force sensing capabilities using the 'pop-up MEMS' technology, and integrated them with the robotic system. To demonstrate system capabilities, we successfully stacked fourteen 1 mm diameter metal spheres into a pyramid via teleoperation. The micromanipulation system presented here may improve surgical outcomes during open microsurgery by offering better accuracy and dexterity to surgeons.

Point Cloud Grasp Planning using Automated Sphere Decomposition, Brayden Hollis, Shuai Li, Stacy Patterson, and Jeff Trinkle, Rensselaer Polytechnic Institute
Efficient grasp planning is critical for robots in unstructured environments. This is a challenging problem for arbitrary objects because the complexity of the shape yields a large search space of feasible grasps. To reduce the search space and thus achieve online grasp planning for arbitrary objects, we convert the object's point cloud representation to a union-of-spheres representation through clustering and least-squares sphere fitting. This union-of-spheres representation is used to generate and test an initial set of possible grasp plans using the GraspIt! primitive planner. The resulting plans are further evaluated against the point cloud representation and scene data with collision and clearance checkers within GraspIt!. This returns a list of feasible grasp plans sorted by robustness and obstacle-avoidance metrics.
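
The sphere-fitting step has a neat closed form: writing |x - c|^2 = r^2 as 2 c.x + (r^2 - |c|^2) = |x|^2 makes it linear in the center c and an offset k, so each cluster is one least-squares solve. A self-contained sketch on synthetic points (illustrative, not the authors' code):

    # Algebraic least-squares sphere fit: solve A [c; k] = |x|^2, then
    # recover the radius from r^2 = k + |c|^2.
    import numpy as np

    rng = np.random.default_rng(0)
    center_true, r_true = np.array([0.1, -0.2, 0.5]), 0.03
    dirs = rng.normal(size=(200, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = center_true + r_true * dirs + 0.001 * rng.normal(size=(200, 3))

    A = np.hstack([2 * pts, np.ones((len(pts), 1))])   # unknowns: [cx, cy, cz, k]
    b = np.sum(pts ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    print(center, radius)   # recovers roughly (0.1, -0.2, 0.5) and 0.03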

Object Manipulation with Learning and Reachability Database, Jun Dong and Jeffrey C. Trinkle, Rensselaer Polytechnic Institute
Our work addresses the object manipulation problem with a two-step approach. The first step teaches robots which trajectories accomplish a task well, and the second step enables robots to plan placements and joint motions that realize those task trajectories. By combining the strengths of humans and robots, we can make robots more intelligent and autonomous: a human's knowledge about manipulating objects can be transferred to the robot through demonstration, while the robot is better suited to reasoning about its own structure and capabilities.

Contact Sensing Based Corrective Actions in Data-Driven Grasping, Qian Wan and Robert D. Howe, Harvard University
Grasping is ultimately a contact activity. The interactions between the hand and object can bear crucial information regarding the stability of the grasp, though these interactions may not be easily discernible visually. Therefore it is important to further our understanding of how sensors internal to the hand correlate to grasping activities. We present an experiment in which we used learning algorithms on contact and proprioceptive sensing data collected during hundreds of real grasps. The algorithm enables the robot to predict grasp success, recognize the category of error that is causing the grasp instability, and select a corrective action that can compensate for that error. Therefore, in each grasp attempt, the system lifts the object off the table only when the grasp is predicted to succeed, and otherwise makes local corrections until it is.

An Assistive Mobility Device with Automated Grasping, Abraham Shultz, Holly Yanco, Andreas ten Pas, and Rob Platt, UMass Lowell and Northeastern University
TBD.

A Framework for Unsupervised Online Human Reaching Motion Recognition and Early Prediction, Ruikun Luo and Dmitry Berenson, Worcester Polytechnic Institute
This paper focuses on recognition and prediction of human reaching motion in industrial manipulation tasks. Several supervised learning methods have been proposed for this purpose, but we seek a method that can build models on-the-fly and adapt to new people and new motion styles as they emerge. Thus, unlike previous work, we propose an unsupervised online learning approach to the problem, which requires no offline training or manual categorization of trajectories. Our approach consists of a two-layer library of Gaussian Mixture Models that can be used both for recognition and prediction. We do not assume that the number of motion classes is known a priori, and thus the library grows if it cannot explain a new observed trajectory. Given an observed portion of a trajectory, the framework can predict the remainder by first determining which GMM it belongs to, and then using Gaussian Mixture Regression to predict the rest of the trajectory. We tested our method on motion-capture data recorded during assembly tasks. Our results suggest that the proposed framework outperforms supervised methods in terms of both recognition and prediction. We also show the benefit of using our two-layer framework over simpler approaches.
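
The prediction step, Gaussian Mixture Regression, is just Gaussian conditioning applied per mixture component. A minimal 1D-input/1D-output sketch with synthetic data; the paper's models are over full trajectory features.

    # Fit a GMM over (input, output) jointly, then condition on the observed
    # input: each component contributes its conditional mean, weighted by how
    # well it explains the input.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    t = rng.uniform(0, 1, size=(500, 1))
    x = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.shape)
    gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
    gmm.fit(np.hstack([t, x]))

    def gmr_predict(t_query):
        mu_parts, w_parts = [], []
        for pi, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_):
            # Standard Gaussian conditioning of component k on t = t_query.
            mu_x = mu[1] + S[1, 0] / S[0, 0] * (t_query - mu[0])
            w = pi * norm.pdf(t_query, mu[0], np.sqrt(S[0, 0]))
            mu_parts.append(mu_x)
            w_parts.append(w)
        w = np.array(w_parts) / np.sum(w_parts)
        return float(np.dot(w, mu_parts))

    print(gmr_predict(0.25))   # should be close to sin(pi/2) = 1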

TBD, Antonio Rafael V. Umali, Worcester Polytechnic Institute
Compliant and cost-effective personal and industrial robots have created more opportunities for Human-Robot Collaboration (HRC). In this work, we present a framework for using the Baxter robot to aid health-care workers in removing biologically contaminated clothing and equipment. Wearing this protective equipment is difficult for humans because it traps body heat and limits dexterity. Thus, our approach seeks to minimize risk while also taking the least amount of time possible to execute. We use Baxter as a semi-autonomous assistant instead of a fully autonomous robot and take advantage of its compliant actuation. We present a systematic way of decomposing high-risk tasks into a series of human-robot actions that reduce the human's risk of exposure to infection. We also present a few key modifications to the Baxter robot which are likely to significantly improve its effectiveness for such tasks.

A Robotic Grasping System With Bandit-Based Adaptation, John Oberlin and Stefanie Tellex, Brown University
A key aim of current research is to create robots that can reliably manipulate objects. However, in many applications, general-purpose object detection or manipulation is not required: the robot would be useful if it could recognize, localize, and manipulate the relatively small set of specific objects most important in that application, but do so with very high reliability. Instance-based approaches can achieve this high reliability, but to work well they require large amounts of data about the objects that are being manipulated. The first contribution of this paper is a system that automates this data collection and adaptation process. When the robot encounters a novel object, it automatically collects instance-based models for detecting the object, estimating its pose, and grasping the object. This approach achieves the generality of category-based methods with the reliability of instance-based methods, at the cost of the robot's time collecting data for the object models. We demonstrate that our approach enables an unmodified Baxter robot to autonomously acquire models for a wide variety of objects and then detect the objects on the tabletop and respond to pick-and-place requests. The baseline system performs well in many instances, but some objects are "stealthy" with respect to our sensors (i.e., they don't scan well) and for other objects, information needed to infer a successful grasp is latent in the environment and arises from complicated physical dynamics; for example, a heavy object might need to be grasped in the middle or else it will twist out of the robot's gripper. The second contribution of this paper is a formalization of object picking as an N-armed bandit problem, where each potential grasp point corresponds to an arm with unknown pick success rate. This formalization enables us to apply bandit-based exploration algorithms to enable a robot to identify the best grasp point by attempting to pick up the object and tracking its successes and failures. Because the number of grasp points is very large, we define a new algorithm for best arm identification in budgeted bandits that computes confidence bounds while incorporating prior information, enabling the robot to quickly find a near optimal arm without pulling all the arms. We demonstrate that our adaptation step significantly improves accuracy over a non-adaptive system, enabling a robot to improve grasping models through experience.
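
To make the bandit framing concrete, here is a minimal Thompson-sampling sketch over a handful of candidate grasp points with simulated success rates; the paper's actual algorithm is a confidence-bound best-arm-identification method with priors, for which this is only a stand-in.

    # Each candidate grasp point is an arm with an unknown pick success rate;
    # sampling from Beta posteriors concentrates picks on good grasps.
    import numpy as np

    rng = np.random.default_rng(0)
    true_rates = np.array([0.2, 0.5, 0.9, 0.4])   # simulated per-grasp success rates
    alpha = np.ones_like(true_rates)              # Beta posterior: successes + 1
    beta = np.ones_like(true_rates)               # Beta posterior: failures + 1

    for trial in range(200):
        arm = int(np.argmax(rng.beta(alpha, beta)))   # sample a rate per arm, pick best
        success = rng.random() < true_rates[arm]      # attempt the pick
        alpha[arm] += success
        beta[arm] += 1 - success

    print("estimated best grasp:", int(np.argmax(alpha / (alpha + beta))))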

Demos

ReFlex SF Hand and TakkTile Sensors, Leif Jentoft, RightHand Robotics

Grasping, Andreas ten Pas, Rob Platt, and Marcus Gualtieri, Northeastern University

Tactile Sensing, Mordechai Rynderman, Northeastern University