2017 IEEE/RSJ International Conference on Intelligent Robots and Systems

9th Workshop on Planning, Perception and Navigation for Intelligent Vehicles

Full Day Workshop, Room 221-214

Registration, Program, Proceedings

September 24th, 2017, Vancouver, Canada

Contact : Professor Philippe Martinet
LS2N-CNRS Laboratory, Ecole Centrale de Nantes,
1 rue de la Noë
44321 Nantes Cedex 03, France
Phone: +33 237406975, Fax: +33 237406934,
Email: Philippe.Martinet@ec-nantes.fr,
Home page: http://www.irccyn.ec-nantes.fr/~martinet


Professor Philippe Martinet, LS2N-CNRS Laboratory, Ecole Centrale de Nantes, 1 rue de la Noë, 44321 Nantes Cedex 03, France, Phone: +33 237406975, Fax: +33 237406930, Email: Philippe.Martinet@ec-nantes.fr,
Home page: http://www.irccyn.ec-nantes.fr/~martinet

Research Director Christian Laugier, INRIA, Emotion project, INRIA Rhône-Alpes, 655 Avenue de l'Europe, 38334 Saint Ismier Cedex, France, Phone: +33 4 7661 5222, Fax : +33 4 7661 5477, Email: Christian.Laugier@inrialpes.fr,
Home page: http://emotion.inrialpes.fr/laugier

Professor Urbano Nunes, Department of Electrical and Computer Engineering of the Faculty of Sciences and Technology of University of Coimbra, 3030-290 Coimbra, Portugal, GABINETE 3A.10, Phone: +351 239 796 287, Fax: +351 239 406 672, Email: urbano@deec.uc.pt,
Home page: http://www.isr.uc.pt/~urbano

Professor Christoph Stiller, Institut für Mess- und Regelungstechnik, Karlsruher Institut für Technologie (KIT), Engler-Bunte-Ring 21, Gebäude: 40.32, 76131 Karlsruhe, Germany, Phone: +49 721 608-42325, Fax: +49 721 661874, Email: stiller@kit.edu
Home page: http://www.mrt.kit.edu/mitarbeiter_stiller.php

General Scope

The purpose of this workshop is to discuss topics related to the challenging problems of autonomous navigation and driving assistance in open and dynamic environments. Technologies related to application fields such as unmanned outdoor vehicles and intelligent road vehicles will be considered from both theoretical and technological points of view. Several research questions at the cutting edge of the state of the art will be addressed. Among the many application areas that robotics is addressing, the transportation of people and goods seems to be a domain that will dramatically benefit from intelligent automation. Fully automatic driving is emerging as the approach that can dramatically improve efficiency while also leading toward the goal of zero fatalities. This workshop will address the robotics technologies at the very core of this major shift in the automobile paradigm. Achievements, challenges and open questions related to this area, such as autonomous outdoor vehicles, will be presented.

Main Topics

  • Road scene understanding
  • Lane detection and lane keeping
  • Pedestrian and vehicle detection
  • Detection, tracking and classification
  • Feature extraction and feature selection
  • Cooperative techniques
  • Collision prediction and avoidance
  • Advanced driver assistance systems
  • Environment perception, vehicle localization and autonomous navigation
  • Real-time perception and sensor fusion
  • SLAM in dynamic environments
  • Mapping and maps for navigation
  • Real-time motion planning in dynamic environments
  • Human-Robot Interaction
  • Behavior modeling and learning
  • Robust sensor-based 3D reconstruction
  • Modeling and Control of mobile robot
International Program Committee

  • Alberto Broggi (VisLab, Parma University, Italy)
  • Philippe Bonnifait (Heudiasyc, UTC, France)
  • Salvador Dominguez Quijada (LS2N, Ecole Centrale de Nantes, France)
  • Zhencheng Hu (Kumamoto University, Japan)
  • Christian Laugier (Emotion, INRIA, France)
  • Philippe Martinet (LS2N, Ecole Centrale de Nantes, France)
  • Urbano Nunes (Coimbra University, Portugal)
  • Cedric Pradalier (GeorgiaTech Lorraine, France)
  • Cyrill Stachniss (AIS, University of Freiburg, Germany)
  • Christoph Stiller (Karlsruhe Institute of Technology, Germany)
  • Benoit Thuilot (Blaise Pascal University, France)
  • Rafael Toledo Moreo (Universidad Politécnica de Cartagena, Spain)
  • Sebastian Thrun (Stanford University, USA)
  • Ming Yang (SJTU Shanghai, China)
  • Dizan Vasquez (INRIA, France)
  • Young-Woo Seo (CMU, USA)
  • Wang Han (NTU, Singapore)
Final program

    Introduction to the workshop 9:00

    Session I: Control and planning
    Chairman: P. Martinet (LS2N, France)

    • Title: Generalized Predictive Planning for Autonomous Vehicles 9:10
      Keynote speaker: Marcelo H. Ang (NUS, Singapore) Presentation

      Abstract: We present a generalized framework for real-time predictive planning in space-time to improve autonomous driving performance in dynamic environments. Predictive planning refers to planning around predicted obstacle trajectories, where robot velocity profiles are solved in an integrated manner along with spatial paths, in contrast to traditional motion planning approaches, which decouple the velocity and path planning problems. Autonomous vehicle deployments are still limited with respect to environmental complexity and operating speed; however, real-world experimental results show that the proposed predictive planning framework can push the bounds of planning capabilities in both respects. The planning methods are demonstrated onboard three classes of vehicles (a road car, a buggy and a scooter), in both unstructured pedestrian environments and on-road environments. Test scenarios include pedestrian crowd navigation, T-junction navigation, defensive driving, and overtaking.

    • Title: Driving Like a Human: Imitation Learning for Path Planning using Convolutional Neural Networks 9:50
      Authors: Eike Rehder, Jannik Quehl and Christoph Stiller Paper, Presentation, video1, video2, video3

      Abstract: Human-like path planning is still a challenging task for automated vehicles. Imitation learning can teach these vehicles to learn planning from human demonstration. In this work, we propose to formulate the planning stage as a convolutional neural network (CNN). Thus, we can employ well-established CNN techniques to learn planning from imitation. With the proposed method, we train a network for planning in complex traffic situations from both simulated and real-world data. The resulting planning network exhibits human-like path generation.

    • Title: Autonomous Perpendicular And Parallel Parking Using Multi-Sensor Based Control 10:10
      Authors: David Perez-Morales, Olivier Kermorgant, Salvador Dominguez-Quijada and Philippe Martinet Paper, Presentation

      Abstract: This paper addresses the perpendicular and parallel parking problems of car-like vehicles, for both forward and reverse maneuvers in one trial, by improving the work presented in [1] with a multi-sensor-based controller and a weighted control scheme. The perception problem is discussed briefly, considering a Velodyne VLP-16 and a SICK LMS151 as the sensors providing the required exteroceptive information. The results obtained from simulations and real experimentation in different parking scenarios show the validity and potential of the proposed approach.

    Coffee Break 10:30

    Session II: Segmentation and reconstruction 11:00
    Chairman: C. Laugier (INRIA, France)
    • Title: Cooperative Autonomous Driving and Interaction with Vulnerable Road Users 11:00
      Keynote speaker: Miguel Ángel Sotelo (University of Alcalá, Spain) Presentation, Video

      Abstract: Autonomous driving has become a blooming topic among car makers and research centers all across the globe since the announcement of Google's self-driving car in 2010. The demonstration of Google's car's ability to drive autonomously on highways and in urban areas changed many minds in the automotive industry, creating a new cohort of what could be coined self-driving believers. Since then, the interest of car makers in self-driving has not ceased to grow and, as a matter of fact, autonomous driving developments and publications have soared worldwide. Despite rapid technological development, a number of issues, not only legal ones, still have to be seriously addressed before autonomous cars can robustly, safely, and efficiently circulate and mix with manually driven vehicles in real traffic. On the one hand, experts in the field agree that autonomous vehicles will become more robust as they develop further cooperation capabilities. In other words, cooperation with the traffic infrastructure, as well as with other vehicles, will make autonomous vehicles more robust and reliable, given that it is widely accepted that standalone self-driving is far less robust than cooperative automated driving. On the other hand, self-driving cars must have the ability to predict other traffic agents' intentions, including those of other vehicles and Vulnerable Road Users (VRU), namely pedestrians and cyclists. This talk describes the design and development of DRIVERTIVE, a DRIVERless cooperaTIVE vehicle, which aims to advance cooperative automation. DRIVERTIVE competed successfully in the Grand Cooperative Driving Challenge (GCDC) in the Netherlands in 2016, where twelve international teams performed a number of cooperative manoeuvres on highways and at intersections.
      In addition, the talk provides deep insights into interaction with Vulnerable Road Users (VRU) by means of short-term intention recognition and accurate trajectory prediction, as a means to go a step further in terms of safety and reliability, since it makes the difference between effective and non-effective intervention. In contrast to trajectory-based approaches, considering the whole body language of a pedestrian or cyclist has the potential to provide early indicators of VRU intentions, much more powerful than those provided by the physical parameters of a trajectory alone. Experimental results show that accurate path prediction can be achieved at a time horizon of up to 1.0 s.

    • Title: A new metric for evaluating semantic segmentation: leveraging global and contour accuracy 11:40
      Authors: Eduardo Fernandez-Moral, Denis Wolf, Renato Martins and Patrick Rives Paper, Presentation, Video

      Abstract: Semantic segmentation of images is an important problem for mobile robotics and autonomous driving because it provides basic information that can be used for complex reasoning and safe navigation. Different solutions have been proposed for this problem over the last two decades, and a significant increase in accuracy has been achieved recently with the application of deep neural networks to image segmentation. One of the main issues when comparing different neural network architectures is how to select an appropriate metric to evaluate their accuracy. Furthermore, commonly employed evaluation metrics can display divergent outcomes, and thus it is not clear how to rank different image segmentation solutions. This paper proposes a new metric which accounts for both global and contour accuracy in a simple formulation, to overcome the weaknesses of previous metrics. We show with several examples the suitability of our approach and present a comparative analysis of several commonly used metrics for semantic segmentation, together with a statistical analysis of their correlation. Several network segmentation models are used for validation with virtual and real benchmark image sequences, showing that our metric captures the information of the most commonly used metrics in a single scalar value.
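
      As an illustration of how a global term and a contour term can be blended into a single scalar, the sketch below combines plain pixel accuracy with a simple boundary-agreement score. This is a generic construction for illustration only, not the exact metric proposed in the paper; the weight `w` and the neighbour-based boundary extraction are assumptions.

```python
import numpy as np

def combined_score(pred, gt, w=0.5):
    """Blend a global term (fraction of correctly labeled pixels) with a
    contour term (agreement between label boundaries) into one scalar."""
    global_acc = (pred == gt).mean()

    def boundary(m):
        # A pixel is a boundary pixel if its label differs from the
        # right or bottom neighbour.
        b = np.zeros(m.shape, dtype=bool)
        b[:, :-1] |= m[:, :-1] != m[:, 1:]
        b[:-1, :] |= m[:-1, :] != m[1:, :]
        return b

    bp, bg = boundary(pred), boundary(gt)
    union = np.logical_or(bp, bg).sum()
    contour_acc = np.logical_and(bp, bg).sum() / union if union else 1.0
    return w * global_acc + (1 - w) * contour_acc
```

      A perfect segmentation scores 1.0, while a prediction that is globally accurate but misplaces object contours is penalized through the second term.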

    • Title: Multibody reconstruction of the dynamic scene surrounding a vehicle using a wide baseline and multifocal stereo system 12:00
      Authors: Laurent Mennillo, Eric Royer, Frederic Mondot, Johann Mousain and Michel Dhome Paper, Presentation

      Abstract: Multibody Visual SLAM has become increasingly popular in the field of Computer Vision during the past decades. Its implementation in robotic systems can benefit numerous applications, ranging from personal assistants to military surveillance to autonomous vehicles. While several practical methods use multibody-enhanced SfM techniques and monocular vision to achieve scene flow reconstruction, most rely on short-baseline stereo systems. In this article, we explore the alternative case of wide-baseline and multi-focal stereo vision to perform incremental multibody reconstruction, taking inspiration from the increasingly popular use of heterogeneous camera systems in current vehicles, such as frontal and surround cameras. A new dataset acquired from such a heterogeneous camera setup mounted on an experimental vehicle is introduced in this article, along with a purely geometrical method performing incremental multibody reconstruction.

    Lunch break 12:20

    Session III: Simulation - Legal Issues 13:30
    Chairman: M. Ang (NUS, Singapore)
    • Title: Technical and legal challenges for urban automated driving 13:30
      Keynote speaker: Seung-Woo Seo (SNU, South Korea) Presentation

      Abstract: Over the past decade, researchers have been investigating viable technologies to realize autonomous driving in urban environments. Some key components of urban autonomous driving are the abilities to detect and track multiple objects, understand situations, and decide on an optimal action policy. Most fundamentally, autonomous vehicles in urban environments must be robust to uncertainties, including the unpredictable movement of moving objects, varying road situations, and uncertain traffic regulations. While there has been great progress in these technologies, many challenges remain that make autonomous driving hard in urban environments, including the diverse dilemmas that we frequently encounter in daily driving. In this talk, we will discuss several key issues related to dilemma situations in urban autonomous driving. We will briefly introduce our approaches to these problems and, additionally, present some results of our research activities, including the SNU Automated Drive, called SNUver.

    • Title: AutonoVi-Sim: Autonomous Vehicle Simulation Platform with Weather, Sensing, and Traffic control 14:10
      Authors: Andrew Best, Sahil Narang, Lucas Pasqualin, Daniel Barber and Dinesh Manocha Paper, Presentation

      Abstract: We present AutonoVi-Sim, a novel high-fidelity simulation platform for testing autonomous driving algorithms. AutonoVi-Sim is a collection of high-level extensible modules that allows rapid development and testing of vehicle configurations and facilitates the construction of complex road networks. AutonoVi-Sim supports multiple vehicles with unique steering or acceleration limits, as well as unique tire parameters and dynamics profiles. Engineers can specify a vehicle's sensor systems and vary the time of day and weather conditions to gain insight into how conditions affect the performance of a particular algorithm. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians, allowing engineers to specify routes for these actors, or to create scripted scenarios which place the vehicle in dangerous reactive situations. AutonoVi-Sim also facilitates data analysis, allowing for capturing video from the vehicle's perspective and exporting sensor data such as relative positions of other traffic participants, camera data for a specific sensor, and detection and classification results. Thus, AutonoVi-Sim allows for the rapid prototyping, development and testing of autonomous driving algorithms under varying vehicle, road, traffic, and weather conditions.

    Session IV: Interactive session (1) (short overviews) 14:30
    Chairman: P. Martinet (LS2N, France)
    • Title: Lateral Controllers using Neuro-Fuzzy Systems for Automated Vehicles: A Comparative Study
      Authors: Sarouthan Sriranjan, Ray Lattarulo, Joshue Perez-Rastelli, Javier Ibanez-Guzman, Alberto Pena Paper, Presentation

      Abstract: Different implementations on automated vehicles are being introduced by researchers and manufacturers, particularly for longitudinal control. Some applications include traffic jam assistance, emergency assisted braking and cruise control, among others. However, lateral control applications are less common due to the complexities of the vehicle dynamics. In this paper, an artificial intelligence approach to controlling the steering wheel of an automated vehicle is presented. Two new lateral controllers are developed: one based on human expertise (fuzzy logic), and the other based on an Adaptive Network-based Fuzzy Inference System (ANFIS) using expert driver data. These controllers have been tested in a simulation environment called Dynacar and compared with a classical PID controller, giving promising results.

    • Title: Asynchronous Multi-Sensor Fusion for 3D Mapping and Localization
      Authors: Patrick Geneva, Kevin Eckenhoff, and Guoquan Huang Paper, Presentation

      Abstract: In this paper, we address the problem of 3D mapping and localization of autonomous vehicles while focusing on optimally fusing multiple heterogeneous and asynchronous sensors. To this end, based on the factor graph-based optimization framework, we design a modular sensor-fusion system that allows for efficient and accurate incorporation of any navigation sensor of different sampling rates. In particular, we develop a general method of out-of-sequence (asynchronous) measurement alignment to incorporate heterogeneous sensors into a factor graph for mapping and localization in 3D, without including all sensor poses as the graph nodes, thus allowing the graph to have an overall reduced complexity. The proposed sensor-fusion system is validated on a public dataset, in which the asynchronous-measurement alignment is shown to have an improved performance, when compared to a naive approach without alignment.
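
      One common building block for this kind of asynchronous alignment is to express a measurement that arrives between two stamped graph nodes as a constraint on an interpolated pose, instead of adding a new node. The sketch below shows linear interpolation of a planar pose; it is a simplified illustration of the general idea, not the paper's actual 3D factor-graph formulation.

```python
import numpy as np

def interpolate_pose(t, t0, pose0, t1, pose1):
    """Linearly interpolate a planar pose (x, y, yaw) at time t between
    two stamped poses. Yaw is interpolated on the circle to avoid
    wrap-around errors."""
    a = (t - t0) / (t1 - t0)
    xy = (1 - a) * pose0[:2] + a * pose1[:2]
    dyaw = np.arctan2(np.sin(pose1[2] - pose0[2]), np.cos(pose1[2] - pose0[2]))
    return np.array([xy[0], xy[1], pose0[2] + a * dyaw])

# A measurement stamped halfway between two nodes is attached to the
# interpolated pose rather than to a newly created graph node.
p = interpolate_pose(0.5, 0.0, np.array([0.0, 0.0, 0.0]),
                     1.0, np.array([2.0, 0.0, np.pi / 2]))
```

      Because no extra node is created per measurement, the graph keeps one node per keyframe regardless of how many asynchronous sensors report in between.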

    • Title: Towards Cooperative Motion Planning for Automated Vehicles in Mixed Traffic
      Authors: Maximilian Naumann and Christoph Stiller Paper, Presentation

      Abstract: While motion planning techniques that let automated vehicles act in a reactive and anticipatory manner are already widely available, approaches to cooperative motion planning are still missing. In this paper, we present an approach that enhances common motion planning algorithms to allow for cooperation with human-driven vehicles. Unlike previous approaches, we integrate the prediction of other traffic participants into the motion planning, such that the influence of the ego vehicle's behavior on the other traffic participants can be taken into account. For this purpose, a new cost functional is presented, containing the cost for all relevant traffic participants in the scene. Finally, we propose a path-velocity-decomposing, sampling-based implementation of our approach for selected scenarios, which is evaluated in simulation.
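
      The idea of a cost functional that accounts for all relevant traffic participants can be sketched in a few lines: an ego trajectory candidate is scored by its own cost plus a weighted sum of the costs its behavior induces on the predicted trajectories of others. All names and the weighting scheme below are illustrative placeholders, not the paper's notation.

```python
def cooperative_cost(ego_traj, others_trajs, ego_cost, other_cost, w_other=0.5):
    """Score an ego trajectory candidate by its own cost plus the cost
    it induces on other (predicted) traffic participants."""
    return ego_cost(ego_traj) + w_other * sum(
        other_cost(t, ego_traj) for t in others_trajs)

def best_candidate(candidates, others_trajs, ego_cost, other_cost):
    """Pick the sampled candidate with the lowest cooperative cost."""
    return min(candidates,
               key=lambda c: cooperative_cost(c, others_trajs, ego_cost, other_cost))
```

      With w_other = 0 this collapses to a conventional egoistic planner, so the weight directly trades off ego comfort against the inconvenience imposed on others.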

    • Title: Prediction of Urban Pedestrian Behaviour using Natural Vision and Potential Fields
      Authors: Pavan Vasishta, Dominique Vaufreydaz and Anne Spalanzani Paper, Presentation

      Abstract: This paper proposes to model pedestrian behaviour in urban scenes by combining the principles of urban planning with the sociological concept of Natural Vision. This model assumes that the environment perceived by pedestrians is composed of multiple potential fields that influence their behaviour. These fields are derived from static scene elements like sidewalks, crosswalks, buildings and shop entrances, and from dynamic obstacles like cars and buses. Using this model, autonomous cars increase their level of situational awareness in the local urban space, with the ability to infer probable pedestrian paths in the scene and so predict, for example, legal and illegal crossings.
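
      The potential-field idea can be sketched with the classic formulation: attractive quadratic wells around goal-like elements (e.g. cross-walks) and inverse-distance barriers around obstacles (e.g. cars). The gains and field shapes below are textbook choices, not the fields derived in the paper.

```python
import numpy as np

def combined_potential(points, attractors, repulsors, k_att=1.0, k_rep=50.0):
    """Evaluate the summed potential at an (N, 2) array of 2D positions:
    quadratic wells around attractors, inverse-distance barriers
    around repulsors."""
    U = np.zeros(len(points))
    for g in attractors:
        U += 0.5 * k_att * np.sum((points - g) ** 2, axis=1)
    for o in repulsors:
        U += k_rep / (np.linalg.norm(points - o, axis=1) + 1e-6)
    return U
```

      A pedestrian is then assumed to prefer paths that descend this field, so likely crossings correspond to low-potential corridors through the scene.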

    • Title: Constant Space Complexity Environment Representation for Vision-based Navigation
      Authors: Jeffrey Kane Johnson Paper, Presentation

      Abstract: This paper presents a preliminary conceptual investigation into an environment representation that has constant space complexity with respect to the camera image space. This type of representation allows a mobile agent to bypass what are often complex and noisy transformations between camera image space and Euclidean 3-space. The approach is to feed camera data directly into a potential function that maps pixel values to potential values. The resulting discrete potential field has constant space complexity with respect to the image plane. This enables planning and control algorithms, whose complexity often depends on the size of the environment representation, to be defined with constant run-time. This can be very useful for platforms with strict resource constraints, such as embedded and real-time systems.
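
      A minimal version of the pixel-to-potential mapping might look as follows, assuming a grayscale image where bright pixels are traversable; the normalization is an assumption, but it shows why the representation's size is fixed by the image, not by the scene.

```python
import numpy as np

def image_potential(gray, free_value=255):
    """Map pixel intensities directly to potential values in image
    space: bright (free) pixels -> low potential, dark (obstacle)
    pixels -> high potential. The field always has the image's fixed
    W x H size, i.e. constant space w.r.t. the environment."""
    return (free_value - gray.astype(np.float64)) / free_value

img = np.array([[255, 0], [128, 255]], dtype=np.uint8)
field = image_potential(img)  # planning can now run directly on this field
```

      No projection into Euclidean 3-space is needed, so the noisy image-to-world transformation the abstract mentions is bypassed entirely.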

    • Title: Relocalization under Substantial Appearance Changes using Hashing
      Authors: Olga Vysotska and Cyrill Stachniss Paper, Presentation

      Abstract: Localization under appearance changes is essential for robots during long-term operation. This paper investigates the problem of place recognition in environments that undergo dramatic visual changes. Our approach builds upon previous work on graph-based image sequence matching and extends it by incorporating a hashing-based image retrieval strategy for cases of localization failure or the kidnapped robot problem. We present a variant of a hashing algorithm that allows for fast retrieval of high-dimensional CNN features. Our experiments suggest that our algorithm can reliably recover from localization errors by globally relocalizing the robot. At the same time, our hashing-based candidate selection is substantially faster than state-of-the-art locality-sensitive hashing.
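
      A standard way to hash high-dimensional descriptors is random-hyperplane hashing: project each descriptor onto random directions and keep the sign pattern as a binary code, so descriptors with a small angle between them collide with high probability. This is the generic scheme such retrieval builds on; the variant proposed in the paper differs in details not reproduced here, and the toy three-dimensional descriptors are purely illustrative.

```python
import numpy as np

def hash_codes(features, n_bits=16, seed=0):
    """Random-hyperplane LSH: one bit per random direction, set when
    the descriptor's projection onto that direction is positive."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((features.shape[1], n_bits))
    return (features @ planes > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

feats = np.array([[1.0, 0.1, 0.0],    # query descriptor
                  [0.9, 0.2, 0.1],    # same place, changed appearance
                  [-1.0, 0.0, 0.5]])  # different place
codes = hash_codes(feats)
```

      Candidate images whose codes are within a small Hamming radius of the query code are then verified by the sequence-matching stage.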

    • Title: Experimental study of the precision of a multi-map AMCL-based localization system
      Authors: Gaetan Garcia, Salvador Dominguez Quijada, Jean-Marc Blosseville, Arnaud Hamon, Xavier Koreki and Philippe Martinet Paper, Presentation

      Abstract: Autonomous navigation on the public road network, in particular in urban and semi-urban areas, requires a precise localization system with wide coverage, suitable for long distances. Moreover, the system must be adapted to higher speeds than those of typical indoor mobile robots. Conventional GPS is not precise enough to satisfy these requirements. In addition, GPS suffers from signal fading and multi-path (signal reflections on nearby surfaces), which are very common in urban environments because of the buildings. This paper presents the methodology and statistical performance results, over nearly 100 km, of the localization system developed at LS2N, which is based on classical probabilistic Monte Carlo localization adapted for multiple maps. The environments under study are urban and suburban roads.
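
      The core of the (single-map) Monte Carlo localization loop referred to above is the predict-update-resample cycle over a particle set. The sketch below is the textbook version, with the paper's multi-map bookkeeping omitted and with assumed noise levels.

```python
import numpy as np

def mcl_step(particles, weights, motion, measurement_likelihood, rng):
    """One predict-update-resample cycle of Monte Carlo localization.
    particles: (N, 3) array of (x, y, yaw) pose hypotheses."""
    # Predict: apply the odometry increment with additive noise.
    particles = particles + motion + rng.normal(0.0, 0.05, particles.shape)
    # Update: reweight each hypothesis by the sensor likelihood.
    weights = weights * measurement_likelihood(particles)
    weights = weights / weights.sum()
    # Resample: low-variance systematic resampling.
    u = (rng.random() + np.arange(len(weights))) / len(weights)
    idx = np.minimum(np.searchsorted(np.cumsum(weights), u), len(weights) - 1)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```

      Each call concentrates the particles in the regions of the map that best explain the latest sensor reading.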

    • Title: PedLearn: Realtime Pedestrian Tracking, Behavior Learning, and Navigation for Autonomous Vehicles
      Authors: Aniket Bera and Dinesh Manocha Paper, Presentation

      Abstract: We present a real-time tracking algorithm that extracts the trajectory of each pedestrian in a crowd video using a combination of non-linear motion models and learning methods. These motion models are based on new collision-avoidance and local navigation algorithms that provide improved accuracy in dense settings. The resulting tracking algorithm can handle dense crowds with tens of pedestrians at real-time rates (25-30 fps). We also give an overview of techniques that combine these motion models with global movement patterns and Bayesian inference to predict the future position of each pedestrian over a long time horizon. The combination of local and global features enables us to accurately predict the trajectory of each pedestrian in a dense crowd at real-time rates. We highlight the performance of the algorithm on real-world crowd videos with medium crowd density.

    • Title: Safe Navigation in Dynamic, Unknown, Continuous, and Cluttered Environments
      Authors: Mike D'Arcy, Pooyan Fazli and Dan Simon Paper, Presentation

      Abstract: We introduce PROBLP, a probabilistic local planner, for safe navigation of an autonomous robot in dynamic, unknown, continuous, and cluttered environments. We combine the proposed reactive planner with an existing global planner and evaluate the hybrid in challenging simulated environments. The experiments show that our method achieves a 77% reduction in collisions over the straight-line local planner we use as a benchmark.

    Session IV: Interactive session (2) (front to poster) 15:15 - 16:00
    Chairman: P. Martinet (LS2N, France)

    Coffee break 15:30

    Session V: Perception 16:00
    Chairman: C. Laugier (INRIA, France)
    • Title: Where the Intermediate is the Big Step – Intralogistics with Safe and Scalable Fleets of Autonomously Operating Vehicles in Shared Spaces 16:00
      Keynote speaker: Achim J. Lilienthal (Örebro University, Sweden) Presentation

      Abstract: Today, intralogistic services have to respond quickly to changing market needs, unforeseeable trends and shorter product life cycles. These drivers pose new demands on intralogistic systems to be highly flexible, rock-solid reliable, self-optimising, quickly deployable and safe, yet efficient in environments shared with humans. In this presentation I will report on ILIAD, an H2020 project that set out to enable the transition to automation of intralogistic services in the food distribution sector, where the challenges mentioned are particularly pressing. ILIAD develops robotic solutions that can integrate with current warehouse facilities, extending the state of the art to achieve self-deploying fleets of heterogeneous robots; life-long self-optimisation; manipulation from a mobile platform; efficient, legible and safe operation in environments shared with humans; and efficient fleet management with formal guarantees. I will present first results obtained regarding tracking and analysing humans; quantifying map quality; learning activity patterns inferred from long-term observations; action and intention recognition for improved human-robot interaction; and integration of task allocation, coordination and motion planning for heterogeneous robot fleets.

    • Title: Fast Image-Based Geometric Change Detection in a 3D Model 16:40
      Authors: Emanuele Palazzolo and Cyrill Stachniss Paper, Presentation

      Abstract: 3D models of the environment are used in numerous robotic applications and should reflect the current state of the world. In this paper, we address the problem of quickly finding structural changes between the current state of the world and a given 3D model using a small number of images. Our approach finds inconsistencies between pairs of images by reprojecting one image onto the other through the 3D model. Ambiguities about possible inconsistencies resulting from this process are resolved by combining multiple images, such that the 3D location of the change can be estimated. A focus of our approach is that it can be executed fast enough to allow operation on a mobile system. We implemented our approach in C++ and tested it on an existing dataset for change detection as well as on self-recorded image sequences. Our experiments suggest that our method quickly finds changes in the geometry of a scene.

    • Title: Fast Graph-Based Place Recognition 17:00
      Authors: Mattia G. Gollub, Renaud Dubé, Hannes Sommer, Igor Gilitschenski and Roland Siegwart Paper, Presentation

      Abstract: Place recognition is a crucial capability of autonomous vehicles and is commonly approached by identifying keypoint correspondences that are geometrically consistent. This geometric verification process can be computationally expensive when working with 3D data and with an increasing number of candidates and outliers. In this work, we propose a technique for performing this 3D geometric verification efficiently by taking advantage of the sparsity of the problem. Exploiting the relatively small size of the area around the vehicle, the reference map is first subdivided into partitions, and geometric verifications are only performed across relevant partitions, guaranteeing the sparseness of the resulting consistency graph. A maximum clique detection algorithm is finally applied to find the inliers and the associated 3D transformation, taking advantage of the low degeneracy of the graph. Through experiments in urban driving scenarios, we show that our method outperforms a state-of-the-art method both asymptotically and in practice.
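
      The verification step described above can be illustrated on toy 2D data: candidate correspondences form a graph in which two matches are connected when they preserve their pairwise distance, and the largest clique is the inlier set. The brute-force clique search below only stands in for the dedicated maximum-clique algorithm used in the paper, and the point sets are invented for illustration.

```python
import itertools
import numpy as np

def consistent_matches(map_pts, query_pts, tol=0.1):
    """Return the largest set of correspondences (map_pts[i] <->
    query_pts[i]) that preserve all pairwise distances up to tol,
    i.e. the maximum clique of the consistency graph (brute force)."""
    n = len(map_pts)
    adj = np.zeros((n, n), dtype=bool)
    for i, j in itertools.combinations(range(n), 2):
        d_map = np.linalg.norm(map_pts[i] - map_pts[j])
        d_query = np.linalg.norm(query_pts[i] - query_pts[j])
        adj[i, j] = adj[j, i] = abs(d_map - d_query) < tol
    for k in range(n, 0, -1):  # try the largest clique size first
        for combo in itertools.combinations(range(n), k):
            if all(adj[a, b] for a, b in itertools.combinations(combo, 2)):
                return list(combo)
    return []

map_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
query_pts = map_pts + np.array([2.0, 3.0])  # rigidly shifted observation
query_pts[3] = [0.0, 0.0]                   # one outlier match
inliers = consistent_matches(map_pts, query_pts)
```

      Partitioning the map keeps this graph sparse, which is what makes the clique search tractable at scale.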

    Closing 17:20
    Author Information

      Format of the paper: Papers should be prepared according to the IROS17 final camera-ready format and should be 4 to 6 pages long. Detailed information on the paper format is available on the IROS17 page: http://www.iros2017.org/contributing/instructions-for-authors. Papers must be sent to Philippe Martinet by email at Philippe.Martinet@ec-nantes.fr.

      Important dates (preliminary)

      • Deadline for Paper submission: (new) July 7th, 2017
      • Acceptance with review comments: July 25th, 2017
      • Deadline for final paper submission: August 20th, 2017 (12am at the latest)

      Talk information

      • Invited talk: 40 min (35 min talk, 5 min questions)
      • Other talks: 20 min (17 min talk, 3 min questions)

      Interactive session

      • Interactive and open session: 1h00

    Previous workshops

      Previously, several workshops have been organized in much the same field. The 1st edition, PPNIV'07, was held in Rome during ICRA'07 (around 60 attendees); the second, PPNIV'08, in Nice during IROS'08 (more than 90 registered attendees); the third, PPNIV'09, in St. Louis during IROS'09 (around 70 attendees); the fourth, PPNIV'12, in Vilamoura during IROS'12 (over 95 attendees); the fifth, PPNIV'13, in Tokyo during IROS'13 (over 135 attendees); the sixth, PPNIV'14, in Chicago during IROS'14 (over 100 attendees); the seventh, PPNIV'15, in Hamburg during IROS'15 (over 155 attendees); and the eighth, PPNIV'16, in Rio de Janeiro during ITSC'16 (over 95 attendees).
      In parallel, we have also organized SNODE'07 in San Diego during IROS'07 (around 80 attendees), SNODE'09 in Kobe during ICRA'09 (around 70 attendees), RITS'10 in Anchorage during ICRA'10 (around 35 attendees), PNAVHE'11 in San Francisco during IROS'11 (around 50 attendees), and MEPC'14 in Hong Kong during ICRA'14 (over 60 attendees).

      Special issues have been published in IEEE Transactions on ITS (Car and ITS applications, September 2009), in IEEE-RAS Magazine (Perception and Navigation for Autonomous Vehicles, March 2014) and in ITS Magazine (Perception and Navigation for Autonomous Vehicles, March 2015). We also plan to prepare a special issue of the International Journal of Robotics Research (IJRR).


      Proceedings: The workshop proceedings will be published within the IROS Workshop/Tutorial CD-ROM and electronically as a PDF file.

      Special issue: Selected papers will be considered for a special issue of the International Journal of Robotics Research (IJRR). We will issue an open call, and submissions will go through a separate peer-review process.