Pedestrians on the Roadway: A Workshop on Autonomous Vehicles Encountering Pedestrians


Pedestrians on the Roadway: A Workshop on Autonomous Vehicles Encountering Pedestrians

November 30, 2020


AGENDA FOR TODAY’S WEBINAR

12:30 p.m. – 12:35 p.m.: Welcome
12:35 p.m. – 12:40 p.m.: Introduction to the workshop by Umit Ozguner, PhD, The Ohio State University

Session 1 (12:40 p.m. – 2:00 p.m.): Modeling vehicle-pedestrian interaction
12:40 p.m. – 1:10 p.m.: “Modeling and simulating collective pedestrian motion influenced by vehicle,” Dongfang Yang, The Ohio State University
1:10 p.m. – 1:30 p.m.: “Modeling interaction of pedestrians and cars in shared spaces using game theory,” Fatema Tuj Johora, TU Clausthal, Germany
1:30 p.m. – 1:50 p.m.: “Detecting and stopping for occluded and emergent pedestrians,” Mert Koc, The Ohio State University
1:50 p.m. – 2:10 p.m.: Break

Session 2 (2:10 p.m. – 3:20 p.m.): Sensing the pedestrians
2:10 p.m. – 2:40 p.m.: “The development of pedestrian and bicyclist surrogates,” Chi-Chih Chen, PhD, The Ohio State University
2:40 p.m. – 3:00 p.m.: “Fusing vision and pointcloud for faraway pedestrian detection,” Haolin Zhang, The Ohio State University
3:00 p.m. – 3:20 p.m.: “Trajectory dataset of vehicle-pedestrian interaction in shared spaces,” Dongfang Yang
3:20 p.m. – 3:30 p.m.: General discussion with the audience

Sessions chaired by Dr. Ekim Yurtsever, The Ohio State University


INTRODUCTION

Umit Ozguner, PhD
Professor Emeritus
TRC Inc. Chair on ITS
Electrical and Computer Engineering
The Ohio State University


Pedestrians on the Roadway
Prof. Emeritus Umit Ozguner
Center for Automotive Research (CAR) and Department of ECE

November 30, 2020


Motivation

• Pedestrian crowds:
  • Slow people movers in pedestrian areas
  • Cars in shared spaces
• Pedestrians crossing the roadway:
  • Emerging occluded pedestrians
  • Intent of pedestrians at the side
  • Faraway pedestrians


Thanks

• DoT UTC Program support through CMU
• Prof. Keith Redmill
• Collaborating universities:
  • Nagoya University, Japan
  • TU Clausthal, Germany
  • Dalian Univ. of Technology, China
• OSU CAR and CAR staff for organization
• Interested in our research? Contact: ozguner.1@osu.edu



SESSION CHAIR

Ekim Yurtsever, PhD
Research Associate Engineer
Center for Automotive Research
The Ohio State University


Session 1: Modeling vehicle-pedestrian interaction


SESSION 1 – PRESENTATION #1

“Modeling and simulating collective pedestrian motion influenced by vehicle”
Dongfang Yang
PhD Candidate
Center for Automotive Research
The Ohio State University


Modeling and Simulating Collective Pedestrian Motion Influenced by Vehicle

Developing a Sub-Goal Social Force Model for Pedestrian Motion

Dongfang Yang, Ph.D. Candidate
Center for Automotive Research (CAR) – CAR West
Department of Electrical and Computer Engineering
The Ohio State University
Nov 30, 2020


Motivation

• This work focuses on pedestrian motion modeling
• Motivation 1: improving urban planning
• Motivation 2: benefiting autonomous driving in pedestrian scenarios

• Example scenarios:

A well-designed shared street in Brighton, UK, allowing many types of road users to coexist. Image source: https://www.pps.org/article/what-is-shared-space

A highly mixed shared plaza in Graz, Austria. Image source: https://www.cnu.org/publicsquare/shared-space-intersections-mean-less-delay

Shibuya Crossing, Tokyo, Japan


Overview

• Proposed sub-goal social force model (SG-SFM) for pedestrian motion:
  • Interaction with pedestrians
  • Interaction with vehicles (new vehicle influence design)
  • Generic model, applicable to various scenarios
  • Comprehensive evaluation: 3 datasets (quantitative), fundamental interaction scenarios (qualitative)

• The work is currently under review and will be released soon.
• Check the video: https://youtu.be/rSLFmsy8FNw

Our previous related works:
[1] Yang, Dongfang, Ümit Özgüner, and Keith Redmill. "Social force based microscopic modeling of vehicle-crowd interaction." In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1537-1542. IEEE, 2018.
[2] Yang, Dongfang, Ümit Özgüner, and Keith Redmill. "A Social Force Based Pedestrian Motion Model Considering Multi-Pedestrian Interaction with a Vehicle." ACM Transactions on Spatial Algorithms and Systems (TSAS) 6, no. 2 (2020): 1-27.
[3] Yang, Dongfang, Keith Redmill, and Ümit Özgüner. "A Multi-State Social Force Based Framework for Vehicle-Pedestrian Interaction in Uncontrolled Pedestrian Crossing Scenarios." Accepted by 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, United States, 2020.
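The driving-plus-repulsion structure shared by social force models can be sketched in a few lines. This is a generic Helbing-style update and not the SG-SFM itself: the parameter values (`v_desired`, `tau`, `A`, `B`) and the treatment of a vehicle as a point-like neighbor are illustrative assumptions.

```python
import math

def social_force_step(pos, vel, goal, neighbors, dt=0.1,
                      v_desired=1.3, tau=0.5, A=2.0, B=0.3):
    """One Euler step of a minimal social force model for one pedestrian.

    pos, vel, goal: (x, y) tuples.  neighbors: positions of other agents
    (other pedestrians, or the closest point of a vehicle).
    A and B are illustrative repulsion gains, not calibrated values.
    """
    # Driving force: relax toward the desired speed pointing at the (sub-)goal
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    fx = (v_desired * dx / dist - vel[0]) / tau
    fy = (v_desired * dy / dist - vel[1]) / tau

    # Repulsive forces from neighbors, decaying exponentially with distance
    for nx, ny in neighbors:
        rx, ry = pos[0] - nx, pos[1] - ny
        r = math.hypot(rx, ry) or 1e-9
        mag = A * math.exp(-r / B)
        fx += mag * rx / r
        fy += mag * ry / r

    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel
```

A pedestrian with a clear goal moves straight toward it; a close-by neighbor in its way reduces the forward force, which is the mechanism the SG-SFM extends with sub-goals and a dedicated vehicle-influence term.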


Thanks! For further questions, please contact: yang.3455@osu.edu


SESSION 1 – PRESENTATION #2

“Modeling interaction of pedestrians and cars in shared spaces using game theory”
Fatema Tuj Johora
PhD Candidate
TU Clausthal, Germany


Modeling interaction of pedestrians and cars in shared spaces using game theory

M.Sc. Fatema Tuj Johora, Ph.D. Candidate
Department of Informatics, TU Clausthal, Germany
Supervisor: Prof. Jörg P. Müller


Background: Shared Space

A shared space in Sonnenfelsplatz, Graz, Austria (https://goo.gl/sW1D8V)

A shared space in Hamburg, Germany (from the MODIS (Multi mODal Intersection Simulation) project)

• Multimodal road users coexist, e.g., pedestrians and cars
• No explicit traffic regulations
• No clear priorities among road users
• Interaction is handled by social protocols and informal rules

30.11.2020 | Modeling interaction of pedestrians and cars in shared spaces: using game-theoretic approach | Fatema Tuj Johora



Motivation

• Increasing use of the shared space design principle
• Traffic performance estimation of shared spaces is critical
• Good simulation models are needed, also for autonomous cars

Previous works on modeling and simulating shared spaces are based on the social force model, SFM (Helbing et al., 1995), extended with:
• long-range conflict avoidance mechanisms (Pascucci et al., 2015; 2017)
• rule-based constraints (Anvari et al., 2015)
• a multinomial logit model (Pascucci et al., 2018)
• a game-theoretic approach (Schönauer et al., 2012)
• new forces (Yang et al., 2018; 2020; Zeng et al., 2014)



Contribution

1. Propose a novel motion model of pedestrians and cars:
   • large variety of interaction scenarios, e.g., multiple-user interactions, interdependent interactions and group-to-car interactions
   • interaction recognition depending on the situation context
   • transferable to different environment settings

   Game Theory + Social Force Model → Game-Theoretic Social Force Model (GSFM)
   (inspired by Schönauer et al., 2012)

2. Propose a hybrid model of GSFM and a deep learning model
3. Propose a method for recognition and modeling of different behavioral patterns of pedestrians



Methodology

1. Detect, analyze and classify typical interactions through video observations and literature review
   • Current work only considers pedestrians and cars
2. Model and simulate the motion of road users
   • Adapt and extend the SFM with a game-theoretic model
3. Calibrate the model
4. Evaluate the model
   • Evaluation of realistic trajectory modeling
   • Comparison with a deep learning model
   • Evaluation of model transferability
   • Evaluation of traffic performance



Type of Interactions

• Simple interaction: percept → act
  • Reactive interaction
  • Following (car only)
• Complex interaction: percept → choose strategy (continue, deviate, accelerate) → act
  • Pedestrian-to-pedestrian
  • Car-to-car
  • Pedestrian(s)-to-car
  • Pedestrian(s)-to-cars
  (multiple conflict (MC), multiple user conflict (MUC))

Pedestrian-to-car-driver interactions at the crosswalk

6/20


Zone-Specific Behavior

Based on the environment context:
• Road users’ interactions differ
• Conflict handling differs
• Conflict recognition and classification differs
• Trajectories of road users differ

Analyze and model zone-specific behaviors of pedestrians and cars (Johora, F. T., & Müller, J. P., 2020, Jan)



Motion Model: GSFM

Trajectory planning: calculate the shortest paths of road users using the A* algorithm, considering only the static obstacles.
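The trajectory-planning step can be illustrated with a standard grid A* search. The grid discretization, unit step cost, and Manhattan heuristic are assumptions for this sketch, not details of the GSFM implementation.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid; cell value 1 = static obstacle.

    Illustrative stand-in for the GSFM trajectory-planning layer, which
    likewise considers only static obstacles at this stage.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no path around the static obstacles
```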

Legend: AF is Additional Force, SFM is Social Force Model



Force-Based Modeling: Extended SFM

A group is coherent if the leader can see the last member within the field-of-view range: dist(x_ij, x_ij′) ≤ d_social

Extended model for vehicle: Ahmed, S., Johora, F. T., & Müller, J. P. (2019, May).



Game-Theoretic Decision-Making

Stackelberg games are sequential leader-follower games:
• The leader plays by backward induction, anticipating the follower’s behavior
• The follower optimizes its utility given the leader’s action
• Single leader, but one or more followers

Example game tree (C = Car as leader, P = Pedestrian as follower; Con = Continue, Dec = Decelerate, Dev = Deviate), with payoffs (leader, follower):
• (Con, Con): (-100, -100)
• (Con, Dec): (2, 0)
• (Con, Dev): (2, 1)
• (Dec, Con): (0, 2)
• (Dec, Dec): (0, 0)
• (Dec, Dev): (0, 0)
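Backward induction for such a one-shot leader-follower game is short enough to sketch directly. The payoff numbers mirror the slide’s example game and are ordinal illustrations, not the calibrated GSFM payoffs.

```python
def stackelberg(payoffs, leader_moves, follower_moves):
    """Solve a one-shot Stackelberg game by backward induction.

    payoffs[(l, f)] = (leader_utility, follower_utility).
    The leader picks the move whose induced follower best response
    maximizes the leader's own utility.
    """
    best_pair, best_val = None, float("-inf")
    for l in leader_moves:
        # Follower best-responds to the observed leader move
        f = max(follower_moves, key=lambda m: payoffs[(l, m)][1])
        if payoffs[(l, f)][0] > best_val:
            best_pair, best_val = (l, f), payoffs[(l, f)][0]
    return best_pair

# Car (leader) vs. pedestrian (follower); illustrative ordinal payoffs
payoffs = {
    ("Con", "Con"): (-100, -100),  # both continue: collision, worst for both
    ("Con", "Dec"): (2, 0),
    ("Con", "Dev"): (2, 1),
    ("Dec", "Con"): (0, 2),
    ("Dec", "Dec"): (0, 0),
    ("Dec", "Dev"): (0, 0),
}
```

With these numbers, the car anticipates that continuing makes the pedestrian deviate, which is the car’s best outcome: the solver returns ("Con", "Dev").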



Game-Theoretic Decision-Making

• Leader selection:
  • Pedestrian(s)-to-car interaction: the faster user (the car)
  • Pedestrian(s)-to-cars and car-to-car interactions: the car that recognizes the conflict first
• Payoff estimation:
  • Ordinal valuations of outcomes
  • Impacts of situation dynamics:
    1. Current active behavior
    2. Current speed of target and competitive user
    3. Number of active interactions
    4. Interaction angle
    5. Car following active or not
    6. Vehicle able to stop or not
    7. Zone type
    8. Number of give-ways or uncertainty in driving behavior

Strategy pairs considered (CAR, PED): (Con, Con), (Con, Dec), (Con, Dev), (Dec, Con), (Dec, Dec), (Dec, Dev)



Motion Model: Summary

Flow: the Trajectory Module plans each user’s path; when a conflict is detected, the Game Module selects the strategies (example assumption: car decelerates, pedestrian continues); the Force Module then executes the motion models of the pedestrian and of the vehicle.



Calibration

Simulation-based calibration of model parameters using a genetic algorithm:
• SFM and safety parameters: the difference between simulated and observed trajectories is used as fitness
• Game parameters: the difference between simulated (game) and observed decisions is used as fitness

Here, E, U, and T are the number of scenarios, agents, and time steps, respectively.
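The trajectory-fitness idea can be sketched as follows. The averaging over E scenarios, U agents, and T time steps follows the slide; the toy genetic algorithm (tournament-free elitism plus Gaussian mutation) and all its hyperparameters are illustrative assumptions, not the calibration setup actually used.

```python
import random

def fitness(params, scenarios, simulate):
    """Mean Euclidean gap between simulated and observed positions over
    E scenarios, U agents, and T time steps.  `simulate` stands in for
    running the motion model with the candidate parameters."""
    err, n = 0.0, 0
    for obs in scenarios:                    # E scenarios
        sim = simulate(params, obs)
        for u in range(len(obs)):            # U agents
            for t in range(len(obs[u])):     # T time steps
                dx = sim[u][t][0] - obs[u][t][0]
                dy = sim[u][t][1] - obs[u][t][1]
                err += (dx * dx + dy * dy) ** 0.5
                n += 1
    return err / n

def genetic_search(bounds, fit, pop=20, gens=30, seed=0):
    """Toy genetic algorithm: keep the best half, refill by mutating elites."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fit)[: pop // 2]
        P = elite + [[min(max(g + rng.gauss(0, 0.1), lo), hi)
                      for g, (lo, hi) in zip(rng.choice(elite), bounds)]
                     for _ in range(pop - len(elite))]
    return min(P, key=fit)
```

Calibrating a single hypothetical speed parameter against observations generated with value 0.5 recovers a value near 0.5.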



Qualitative Analysis: Multiple Conflicts



Qualitative Analysis: Multiple User Conflicts



Expert-Based (GSFM) vs. Deep Learning Approaches (LSTM)

(Figure: predicted trajectories in x-y coordinates, in metres.)

(Cheng, H., Johora, F. T., Sester, M., & Müller, J. P. (2020, May); Johora, F. T., Cheng, H., Müller, J. P., & Sester, M. (2020, May))



Evaluation of Transferability

HBS data set, Germany (Pascucci et al., 2017):
• Cars: 4.935 m (mean), 3.773 m (std)
• Pedestrians: 1.80 m (mean), 1.383 m (std)

DUT data set, China (Yang et al., 2019):
• Cars: 6.581 m (mean), 4.229 m (std)
• Pedestrians: 5.4578 m (mean), 9.1073 m (std)

• Cars: 4.6778 m (mean), 3.1293 m (std)
• Pedestrians: 3.739 m (mean), 1.734 m (std)

(Johora, F. T., & Müller, J. P., 2020)



Conclusion

✓ Propose a novel motion model of pedestrians and cars:
  • large variety of interaction scenarios (Johora et al., 2018; 2020; Ahmed et al., 2019)
  • interaction recognition depending on the situation context (Johora et al., 2018; 2020; Ahmed et al., 2019; Hossain et al., 2020)
  • transferable to different environment settings (Johora et al., 2020)
✓ Propose a combined model of GSFM and a deep learning model (Johora et al., 2020; Cheng et al., 2020)
✓ Propose a method for recognition and modeling of different behavioral patterns of pedestrians

Future work:
• Estimation of traffic performance of shared spaces using GSFM
• Integration of new modalities into GSFM
• Improvement of the hybrid model



Email: ftj14@tu-clausthal.de


Thank You

Publications


Johora, F. T., & Müller, J. P. (2018, November). Modeling interactions of multimodal road users in shared spaces. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC) (pp. 3568-3574). IEEE.
Johora, F. T., & Müller, J. P. (2020). Zone-specific interaction modeling of pedestrians and cars in shared spaces. Transportation Research Procedia, 47, 251-258.
Ahmed, S., Johora, F. T., & Müller, J. P. (2019, May). Investigating the Role of Pedestrian Groups in Shared Spaces through Simulation Modeling. In International Workshop on Simulation Science (pp. 52-69). Springer, Cham.
Johora, F. T., Cheng, H., Müller, J. P., & Sester, M. (2020, May). An Agent-Based Model for Trajectory Modelling in Shared Spaces: A Combination of Expert-Based and Deep Learning Approaches. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1878-1880).
Cheng, H., Johora, F. T., Sester, M., & Müller, J. P. (2020, May). Trajectory Modelling in Shared Spaces: Expert-Based vs. Deep Learning Approach? Accepted to International Workshop on Multi-Agent-Based Simulation (MABS).
Hossain, S., Johora, F. T., Müller, J. P., & Hartmann, S. (2020). A Conceptual Model of Conflicts in Shared Spaces. Accepted to International Conference on Intelligent Traffic and Transportation (ICITT 2020).
Johora, F. T., & Müller, J. P. (2020). On Intercultural Transferability and Calibration of Heterogeneous Shared Space Motion Models. Submitted to Transportation Letters Journal (under review).



References

Helbing, Dirk, and Peter Molnar. "Social force model for pedestrian dynamics." Physical Review E 51.5 (1995): 4282.
Pascucci, F., Rinke, N., Schiermeyer, C., Friedrich, B., and Berkhahn, V. "Modeling of shared space with multi-modal traffic using a multi-layer social force approach." Transportation Research Procedia, vol. 10, pp. 316-326, 2015.
Rinke, N., et al. "A multi-layer social force approach to model interactions in shared spaces using collision prediction." Transportation Research Procedia 25 (2017): 1249-1267.
Schönauer, Robert, et al. "Modeling concepts for mixed traffic: Steps toward a microscopic simulation tool for shared space zones." Transportation Research Record: Journal of the Transportation Research Board 2316 (2012): 114-121.
Anvari, B., Bell, M. G., Sivakumar, A., and Ochieng, W. Y. "Modelling shared space users via rule-based social force model." Transportation Research Part C: Emerging Technologies, vol. 51, pp. 83-103, 2015.
Pascucci, F., Rinke, N., Schiermeyer, C., Berkhahn, V., and Friedrich, B. "Should I stay or should I go? A discrete choice model for pedestrian-vehicle conflicts in shared space." Tech. Rep., 2018.
Yang, D., Özgüner, Ü., and Redmill, K. "Social force based microscopic modeling of vehicle-crowd interaction." In 2018 IEEE Intelligent Vehicles Symposium (IV), 1537-1542. IEEE, 2018.
Zeng, W., Nakamura, H., and Chen, P. "A modified social force model for pedestrian behavior simulation at signalized crosswalks." Procedia - Social and Behavioral Sciences 138 (2014): 521-530.
Aschermann, M., Kraus, P., and Müller, J. P. "LightJason: A BDI Framework inspired by Jason." In Multi-Agent Systems and Agreement Technologies: 14th Europ. Conf., EUMAS 2016, ser. LNCS, vol. 10207. Springer, 2017, pp. 58-66. [Online]. Available: https://lightjason.github.io
Yang, Dongfang, Ümit Özgüner, and Keith Redmill. "A Social Force Based Pedestrian Motion Model Considering Multi-Pedestrian Interaction with a Vehicle." ACM Transactions on Spatial Algorithms and Systems (TSAS) 6.2 (2020): 1-27.
Yang, Dongfang, et al. "Top-view trajectories: A pedestrian dataset of vehicle-crowd interaction from controlled experiments and crowded campus." 2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019.



Backup Slides



GSFM vs LSTM-DBSCAN



Proposed Hybrid Model



Motion Model : Summary

GSFM



SESSION 1 – PRESENTATION #3

“Detecting and stopping for occluded and emergent pedestrians”
Mert Koc
MS Candidate
Center for Automotive Research
The Ohio State University


Detecting and Stopping for Occluded and Emergent Pedestrians

Mert Koc (MSc. Candidate, GRA)
Center for Automotive Research (CAR) – CAR West
Department of Electrical and Computer Engineering
The Ohio State University
11/30/2020


Motivation of Current Work / Hypothesis

• This work’s main focus is to design a vehicle controller that handles unseen dangers (occluded/unseen pedestrians)
• Motivation 1: improving autonomous driving controllers for challenging urban scenarios
• Motivation 2: using available visible information to estimate pedestrian emergence probability

• 3 scenarios to test in simulation:
  • Suburban scenario (one/two pedestrians, one/two parked cars, one crosswalk)
  • Mildly crowded urban scenario (multiple parked cars, multiple pedestrians, one crosswalk)
  • Very crowded urban scenario (parking slots are full, multiple pedestrians, one crosswalk)

An example scene from the simulation in the mildly crowded urban scenario


Overview

• Proposed an FSM controller for handling occluded/unseen pedestrian scenarios:
  • Using available information to assess a pedestrian emergence probability to control the vehicle’s speed
  • A general idea that can be used for various scenarios (e.g., dense occlusions due to trees, foggy weather)
  • Evaluation of the proposed model against baselines under 3 scenarios

• The work is currently under review and will be released soon.

Acknowledgement
• I’d like to thank Prof. Umit Ozguner, Dr. Ekim Yurtsever, and Dongfang Yang (PhD candidate) for their contribution and help in research direction.
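The FSM-controller idea can be sketched as a small state machine over the estimated emergence probability. State names, speeds, and thresholds below are invented for illustration; the actual controller and the emergence-probability estimate are in the paper under review.

```python
def fsm_step(state, p_emerge, pedestrian_in_path):
    """One transition of a hypothetical three-state speed controller.

    p_emerge: estimated probability that an occluded pedestrian emerges,
    derived from visible cues (parked cars, crosswalk proximity, ...).
    Returns (next_state, target_speed_mps).  The hysteresis gap between
    the 0.5 entry threshold and the 0.2 resume threshold avoids
    chattering between states.
    """
    if pedestrian_in_path:
        return "STOP", 0.0
    if state == "STOP" and p_emerge >= 0.2:
        return "STOP", 0.0          # stay stopped until risk clearly drops
    if p_emerge > 0.5:
        return "CAUTIOUS", 3.0      # creep past the occlusion
    return "CRUISE", 10.0
```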



Thank you! Questions? Please contact me: koc.15@osu.edu



Break

We’ll be back at 2:10 p.m. Eastern


Session 2: Sensing the pedestrians


SESSION 2 – PRESENTATION #1

“The development of pedestrian and bicyclist surrogates”
Chi-Chih Chen, PhD
Research Associate Professor
Electrical and Computer Engineering
The Ohio State University


Development of Pedestrian and Bicyclist Surrogates
Dr. Chi-Chih Chen
November 30, 2020

Research Associate Professor, IEEE Fellow, AMTA Fellow
Electrical and Computer Engineering
The Ohio State University
Chen.118@osu.edu



Outline

• Concept of operation
• NCAP test protocols for automatic emergency braking (AEB) systems
• Scattering properties of pedestrians in the 76-78 GHz band
• Millimeter-wave surrogate pedestrian mannequin design



Near-Zone Radar Measurement Environments

Conventional far-field RCS definition:

σ = lim_{r→∞} 4πr² (S_r / S_i) = lim_{r→∞} 4πr² (|E_r|² / |E_i|²)

valid beyond the Fraunhofer distance R > 2D²/λ.

Target size      | Fraunhofer distance @ 77 GHz
1 m × 1 m        | 513 m
0.5 m × 0.5 m    | 128 m

The critical range for automotive AEB is < 50 m, so pedestrian targets must be characterized in the near zone:

Near-zone RCS: σ = 4πr² (S_r / S_i) = 4πr² (|E_r|² / |E_i|²)

Down-Range Detection vs. Distance
(Drive test video provided by IUPUI)



Pedestrian Radar Cross Section (RCS) Down-Range Profile

vFSS Bounds

HPBW: 8° Continental ARS 300

http://www.vfss.net/fileadmin/Redakteur/Downloads/vFSS_Pedestrian_Specification_Eng_Version_11_2014-10-27.pdf


Cross-Range Detection vs. Look Angle
(Drive test video provided by IUPUI)



Non-Uniform Phase Effect on RCS in Near Zone

(Figure: normalized backscattered power (dB) vs. distance (m); panels at 2 m, 10 m, 20 m, and 200 m.)


Under-Illumination Effect on RCS in Near Zone

For a fully illuminated flat disc of radius a, the far-field RCS limit is

σ_ff,disc = 4πS²/λ² = 4π³ a_illuminated⁴ / λ²

where the illuminated spot radius is a_illuminated = R·tan(θ_HPBW / 2).


Beam Width Effect on RCS Down Range Profile in Near Zone



Euro NCAP Pedestrian Test Protocols for Autonomous Emergency Braking (AEB) System

NCAP: New Car Assessment Program



NCAP AEB-VRU Test Protocol for Pedestrian Scenarios

Autonomous Emergency Braking (AEB), Vulnerable Road Users (VRU)

Car-to-VRU Farside Adult (CVFA): a collision in which a vehicle travels forwards towards an adult pedestrian crossing its path running from the farside, and the frontal structure of the vehicle strikes the pedestrian at 50% of the vehicle's width when no braking action is applied.

Car-to-VRU Nearside Adult (CVNA-25): a collision in which a vehicle travels forwards towards an adult pedestrian crossing its path walking from the nearside, and the frontal structure of the vehicle strikes the pedestrian at 25% of the vehicle's width when no braking action is applied.

Car-to-VRU Nearside Adult (CVNA-75): a collision in which a vehicle travels forwards towards an adult pedestrian crossing its path walking from the nearside, and the frontal structure of the vehicle strikes the pedestrian at 75% of the vehicle's width when no braking action is applied.

Test speeds: 20-60 km/h (5 km/h increments)



NCAP AEB-VRU Test Protocol for Pedestrian Scenarios

(Figures: distance to point of impact vs. time for the VRU (8 km/h, D_VRU = 4.5 m) and the vehicle under test (VUT, 20-60 km/h in 5 km/h increments, 40 m max); look angle vs. time for 20, 40, and 60 km/h. Note: the angle only changes if braking occurs.)


Surrogate Pedestrian Targets Euro NCAP Pedestrian Targets

4a Articulating Dummy

Transportation Active Safety Institute (TASI) Surrogate Mannequin

http://www.4activesystems.at/en/products/dummies/euroncap-pedestrian.html

Q. Yi et al., "Mannequin development for pedestrian pre-Collision System evaluation," 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, 2014, pp. 1626-1631

“Artificial Skin For Radar Mannequins”, US Patent 9,263,800, February 2016

Development supported by Toyota's Collaborative Safety Research Center (CSRC)



Scattering Properties of Pedestrians



77 GHz Human Scattering Simulation Models

Initial pedestrian modeling using the commercial software Poser 9 (body shape, size, and posture manipulation); the CAD model is exported to the electromagnetic simulation software FEKO.

Model skin dielectric properties:

Dry skin @ 76 GHz: conductivity 38.102 S/m, relative permittivity 6.626, penetration depth 0.41579 mm
Wet skin @ 76 GHz: conductivity 42.8 S/m, relative permittivity 8.613, penetration depth 0.41049 mm

(numerical model meshes)
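The penetration depths in the table can be reproduced from the conductivity and permittivity via the complex propagation constant of a lossy dielectric; a quick cross-check of the dry-skin row:

```python
import cmath
import math

def penetration_depth_mm(f_hz, eps_r, sigma):
    """1/e field penetration depth of a lossy dielectric.

    Complex relative permittivity eps_c = eps_r - j*sigma/(omega*eps0);
    the field decays as exp(-alpha*z) with alpha = -Im(k), k = k0*sqrt(eps_c).
    """
    eps0 = 8.8541878128e-12
    omega = 2 * math.pi * f_hz
    eps_c = eps_r - 1j * sigma / (omega * eps0)
    k0 = omega / 299792458.0
    alpha = -(k0 * cmath.sqrt(eps_c)).imag
    return 1000.0 / alpha  # metres -> millimetres

# Dry skin at 76 GHz -> ~0.416 mm, matching the table above
```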



77 GHz RCS Pattern Simulations

Monostatic RCS sweep (θ = 90°, φ scanned over [0°, 360°)), plane-wave incidence.

Models: adult (male), 1.8 m tall; kid, 1.26 m tall
Postures: standing, running
Angular resolution: 0.05°, i.e. 20 points per degree
Mesh size: average 0.9 mm (≈ λ/4.4); max 1.488 mm; min 0.078 mm
Number of triangles: 4.3-5.6 million (reduced by 80%)
Simulation time per posture: ≈ 6-7 hours (for adult) on 2 CPUs, 6 cores per CPU, Intel Xeon X5660 (clock rate 2.8 GHz)
Memory usage: 5 GByte
Human skin dielectric constant at 76 GHz [1] (dry skin): εr = 6.63, σ = 38.1 S/m



Simulated Pedestrian RCS Patterns @ 77 GHz



OSU 76-78 GHz RCS Pattern Measurement Setup

(Photos: standing and walking postures)

Measurement frequency: 76.5-77.5 GHz
Frequency sampling step size: Δf = 20 MHz
IF bandwidth: 100 Hz
Angular sampling step size: Δθ = 0.2°
Total 360° scan time: 20 mins
Pedestrian postures measured: standing & walking



Measurement vs. Simulation

Measured RCS Pattern @ 77 GHz

Simulated RCS Pattern @ 77 GHz



RCS Contribution from Different Body Parts

(a) Head, (b) Torso, (c) Arm, (d) Legs

• Model generation software: Poser 9
• Body dimensions: fit male adult
• Body parts studied: (1) head, (2) torso, (3) single arm, (4) two legs


Mean RCS Patterns of Pedestrians at 77 GHz

5° smoothed RCS pattern of mannequin vs. mean smoothed RCS pattern of human subjects



Posture Effect on RCS Patterns Max. RCS Direction

torso

M. Chen, C.-C. Chen, "RCS Patterns of Pedestrians at 76-77 GHz," IEEE Antennas and Propagation Magazine, vol. 56, no. 4, pp. 252-263, Aug. 2014


Clothing Effect on RCS Pattern

Typical clothing has little effect on RCS patterns



RCS Patterns of Child at 77 GHz 5° Smoothed RCS

1.2m

RAW RCS

Lower RCS level and larger pattern variation



Simulated RCS of Obese Person RAW RCS

5° Mean RCS

Higher RCS occurs within 30°-60°



Motion-Captured Data of Pedestrian Motion

Motion-capture data of a walking human:
http://accad.osu.edu/research/mocap/mocap data.htm
http://mocap.cs.cmu.edu/

Captured Fast Walking Gait Data

Captured Running Gait Data



Doppler Response Predicted From Analytical Model

Captured fast-walking and running gait data drive a predicted micro-Doppler response; each body segment is modeled via the RCS of a single PEC ellipsoid.

B. R. Mahafza, Radar Systems Analysis and Design Using MATLAB. Chapman and Hall/CRC, 2000.
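The gross features of these micro-Doppler predictions follow from the two-way Doppler relation f_d = 2·v_r·f/c. The walking speeds below are assumed typical values, not the captured gait data:

```python
def doppler_hz(radial_speed_mps, f_hz=77e9, c=3e8):
    """Two-way Doppler shift f_d = 2*v_r*f/c for a monostatic radar."""
    return 2 * radial_speed_mps * f_hz / c

# Assumed typical radial speeds while walking toward the radar:
# torso ~1.4 m/s, leg-swing peaks roughly 3x the torso speed
torso_hz = doppler_hz(1.4)  # ~ 719 Hz
leg_hz = doppler_hz(4.2)    # ~ 2156 Hz
```

These magnitudes are consistent with the roughly 2 kHz leg returns visible in the measured spectrograms that follow.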


Measured Pedestrian Doppler Response (Approaching)

(Spectrograms: Doppler frequency (Hz) vs. time (s) for a pedestrian walking towards the radar from 4.5 m, at radar heights of 50 cm, 75 cm, and 100 cm. Annotated returns: torso, arm, leg, and placed foot; leg Doppler peaks near 2000 Hz.)


Measured Doppler Response (Crossing)

(Spectrograms: Doppler frequency (Hz) vs. time (s) for a pedestrian walking across the radar beam at 4.5 m, at radar heights of 50 cm, 75 cm, and 100 cm; Doppler spread within about ±1000 Hz.)


SESSION 2 – PRESENTATION #2

“Fusing vision and pointcloud for faraway pedestrian detection”
Haolin Zhang
MS Candidate
Center for Automotive Research
The Ohio State University


Fusing Vision and Pointcloud for Faraway Pedestrian Detection

Haolin Zhang (M.S. Candidate)
Control and Intelligent Transportation Research (CITR) Lab
Center for Automotive Research (CAR)
Department of Electrical and Computer Engineering
The Ohio State University

Nov 30, 2020


Focuses

Perception tasks for pedestrians:
• 3D detection: nearby vs. faraway pedestrians, split by a distance threshold (e.g. 60 m)
• Intention estimation

Example of 3D pedestrian detection [1]; example of pedestrian intention estimation [2]

[1] Fürst, Michael, Oliver Wasenmüller, and Didier Stricker. "LRPD: Long range 3d pedestrian detection leveraging specific strengths of lidar and rgb." arXiv preprint arXiv:2006.09738 (2020).
[2] Fang, Zhijie, and Antonio M. López. "Is the pedestrian going to cross? Answering by 2d pose estimation." 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2018.



Contents

Faraway Pedestrian Detection
1. Introduction
2. Motivation
3. Our method
4. Experiments
5. Conclusions

Pedestrian Intention Estimation (ongoing)



Faraway Pedestrian Detection



Introduction (Faraway Pedestrian Detection)

Background: 3D object (e.g. pedestrian, car) detection has been receiving increasing attention from both industry and academia thanks to its wide applications in fields such as autonomous driving.

Problem formulation: Input: sensor data → 3D object detection algorithm → Output: classification and localization (results are usually represented as 3D / bird's-eye-view (BEV) bounding boxes, as shown in Figure 1)

(3D detection; BEV detection)

Figure 1. Examples of 3D/BEV object detection [1]
[1] data from the KITTI dataset (http://www.cvlibs.net/datasets/kitti/)



Introduction (Faraway Pedestrian Detection)

Table 1. Sensors for 3D object detection

Camera: Advantages: readily available and cheap. Disadvantages: performs badly in poor light and weather conditions; no depth.
Lidar: Advantages: 360-degree FoV, precise distance measurements; robust to light conditions. Disadvantages: no texture information; expensive.

Table 2. Mainstream 3D object detection methods

Pointcloud only:
• voxel-based: convert the pointcloud into voxel grids and then learn the representation of each voxel
• point-based: directly learn from the pointcloud data
Pointcloud + RGB:
• feature-based: fuse learned features from both pointcloud and RGB
• frustum-based: use the 2D detection result to assist the 3D detection task



Motivation (Faraway Pedestrian Detection)

data (image and pointcloud) from the KITTI dataset (http://www.cvlibs.net/datasets/kitti/)



Motivation (Faraway Pedestrian Detection)

1. Learned pointcloud representations do not generalize well to faraway objects (the pointcloud of faraway objects is very sparse)
2. Faraway objects are very important for fast-moving vehicles
3. The sparsity problem in pointclouds does not exist in 2D images

Most methods: 3D detection & localization; use pointcloud representations to find the object shape
Frustum-based methods: 2D detection → 3D localization; use pointcloud representations to find the object shape
Our idea: 2D detection → 3D localization; use the pointcloud to find the object centroid (position)

Figure 1. Pedestrian representations in different cases [1]
[1] data from the KITTI dataset (http://www.cvlibs.net/datasets/kitti/)



Our method (Faraway Pedestrian Detection)
Frustum Generation: the 2D object information is extracted from the image by instance segmentation, and the 3D frustum is then formed by extruding the 2D semantic mask into the 3D coordinate system.
Centroid Estimation: the lidar points inside the frustum are collected and clustered, and the 3D object centroid is then estimated.
Box Regression: depending on the faraway judgment, the 3D bounding box is predicted either by our Faraway Frustum Network or by a state-of-the-art detector.

Figure 1. Overview of the 3D/BEV object detection system based on our proposed method (Faraway-Frustum)
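The centroid-estimation stage (lidar points inside the frustum are clustered and the largest cluster's centroid is taken as the object position) can be sketched as below. The single-linkage clustering rule and the eps value are illustrative assumptions, not necessarily the exact clustering used in the paper.

```python
import numpy as np

def cluster_points(points, eps=0.5):
    """Naive single-linkage clustering: two points join the same cluster
    if they are within `eps` meters of each other. O(N^2), which is fine
    for the few hundred lidar points that fall inside one frustum."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]  # BFS from the seed point
        while stack:
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.flatnonzero((d <= eps) & (labels == -1)):
                labels[k] = current
                stack.append(k)
        current += 1
    return labels

def estimate_centroid(points, eps=0.5):
    """Return the centroid of the largest cluster, i.e. the points most
    likely belonging to the object rather than background hits."""
    points = np.asarray(points, dtype=float)
    labels = cluster_points(points, eps)
    biggest = np.bincount(labels).argmax()
    return points[labels == biggest].mean(axis=0)
```

Keeping only the largest cluster is what discards stray background returns that fall inside the frustum cone.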



Our method (Faraway Pedestrian Detection) First stage: Frustum Generation

Step 1: 2D instance segmentation

Step 2: 3D frustum generation



Our method (Faraway Pedestrian Detection) Second stage: Centroid Estimation

Step 1: centroid estimation

Step 2: faraway judgment
Set a faraway-object threshold zth based on the data distribution [1]
If zi > zth, then object i is a faraway object
If zi ≤ zth, then object i is a non-faraway object

[1] Data from the KITTI dataset (http://www.cvlibs.net/datasets/kitti/)
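The faraway judgment reduces to a simple depth threshold on the estimated centroid. A minimal sketch, assuming camera-frame centroids with z as forward depth; the threshold value is illustrative, not the paper's actual zth:

```python
import numpy as np

# Hypothetical threshold in meters, chosen from the depth distribution
# of the training data; the paper's actual value may differ.
Z_TH = 40.0

def is_faraway(centroid_z: float, z_th: float = Z_TH) -> bool:
    """Return True if the object lies beyond the faraway threshold."""
    return centroid_z > z_th

def split_by_distance(centroids):
    """Split an (N, 3) array of centroids into faraway / non-faraway
    index sets, which decide the box-regression branch per object."""
    z = np.asarray(centroids, dtype=float)[:, 2]
    faraway = np.flatnonzero(z > Z_TH)
    nearby = np.flatnonzero(z <= Z_TH)
    return faraway, nearby
```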



Our method (Faraway Pedestrian Detection) Third stage: Box Regression

For faraway objects: Faraway Frustum Network (FF-Net)

For non-faraway objects: any state-of-the-art (SOTA) 3D detector



Experiments (Faraway Pedestrian Detection) Qualitative results:



Experiments (Faraway Pedestrian Detection)
Dataset preparation: KITTI dataset [1], split into a training set and a validation set.
Evaluation metrics: average IoU (intersection over union) and mAP (mean average precision).
Our method: Ours1 (mask), Ours2 (box).
Baseline methods: Ours3 (gt), SECOND [4], PointPillars [5], PV-RCNN [3], and Frustum PointNets [2].
Quantitative results:

[1] Geiger, Andreas, Philip Lenz, and Raquel Urtasun. "Are we ready for autonomous driving? the kitti vision benchmark suite." 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012. [2] Qi, Charles R., et al. "Frustum pointnets for 3d object detection from rgb-d data." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. [3] Shi, Shaoshuai, et al. "Pv-rcnn: Point-voxel feature set abstraction for 3d object detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. [4] Yan, Yan, Yuxing Mao, and Bo Li. "Second: Sparsely embedded convolutional detection." Sensors 18.10 (2018): 3337. [5] Lang, Alex H., et al. "Pointpillars: Fast encoders for object detection from point clouds." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.
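The IoU metric underlying the evaluation can be illustrated for axis-aligned BEV boxes. Note this is a simplification: the official KITTI evaluation uses rotated boxes, whose overlap additionally requires polygon intersection.

```python
def bev_iou(box_a, box_b):
    """Axis-aligned BEV IoU for boxes given as (x_min, z_min, x_max, z_max)
    in the ground plane: intersection area divided by union area."""
    ax1, az1, ax2, az2 = box_a
    bx1, bz1, bx2, bz2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    idepth = max(0.0, min(az2, bz2) - max(az1, bz1))  # overlap depth
    inter = iw * idepth
    union = ((ax2 - ax1) * (az2 - az1)
             + (bx2 - bx1) * (bz2 - bz1) - inter)
    return inter / union if union > 0 else 0.0
```

A detection counts as a true positive for mAP when its IoU with a ground-truth box exceeds the class-specific threshold (0.5 for pedestrians in KITTI).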



Experiments (Faraway Pedestrian Detection)



Experiments (Faraway Pedestrian Detection)



Conclusions (Faraway Pedestrian Detection)

Our method works better at detecting and localizing faraway objects
● Takes advantage of relatively dense image data
● Circumvents the disadvantages of directly learning sparse pointcloud representations

Our detector is flexible
● Can be flexibly combined with any state-of-the-art method

More details can be found in our preprint: Zhang, Haolin, Dongfang Yang, Ekim Yurtsever, Keith A. Redmill, and Ümit Özgüner. "Faraway-Frustum: Dealing with Lidar Sparsity for 3D Object Detection using Fusion." arXiv preprint arXiv:2011.01404 (2020).



Pedestrian Intention Estimation (ongoing)



Introduction (Pedestrian Intention Estimation)
Background: autonomous vehicles in urban environments must
● interact with road users (e.g., pedestrians)
● understand their intentions

For nearby pedestrians, higher-level tasks (e.g., pedestrian intention estimation) are even more important: a report on Google's self-driving car [2] indicates that 90% of failures occur in busy streets, of which 10% are due to incorrect prediction of traffic participants' behavior.

Figure 1. Example of pedestrian intention estimation (source from [1]) [1] Fang, Zhijie, and Antonio M. López. "Is the pedestrian going to cross? answering by 2d pose estimation." 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2018. [2] Google Self-Driving Car Testing Report on Disengagements of Autonomous Mode. Online, Dec. 2015. Accessed: 2017-03-05.



Problem (Pedestrian Intention Estimation)

Existing methods do not fully utilize (or even consider) pedestrian factors & contextual factors.

Example factors for estimating pedestrian intention [1]:
Contextual factors
• Street structure
• Location
Pedestrian factors
• Implicit communication
• Explicit communication
• Orientation

[1] I. Kotseruba, A. Rasouli, and J. K. Tsotsos, "Do They Want to Cross? Understanding Pedestrian Intention for Behavior Prediction", In Intelligent Vehicles Symposium, 2020.



All results below are generated using the method of [1] on the JAAD dataset [2] and the PIE dataset [3].

Pipeline: Images ----> 2D detection and tracking ----> cropped pedestrian images ----> CNN ----> features ----> RNN ----> Prediction

Example failure: at frame n the prediction is "not cross" (real intention: not cross), but by frame n+16 the prediction flips to "cross" while the real intention remains "not cross", so the final prediction ("cross") is wrong.

Challenging cases: Case 1: occlusion; Case 2: wheelchair user; Case 3: skateboarder.
More possible cases: old people, kids, people with a dog, people with a trolley, people with a bicycle, bad weather conditions, ...

[1] Saleh, Khaled, Mohammed Hossny, and Saeid Nahavandi. "Real-time intent prediction of pedestrians for autonomous ground vehicles via spatio-temporal densenet." 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019. [2] Rasouli, Amir, Iuliia Kotseruba, and John K. Tsotsos. "Are they going to cross? A benchmark dataset and baseline for pedestrian crosswalk behavior." Proceedings of the IEEE International Conference on Computer Vision Workshops. 2017. [3] Rasouli, Amir, et al. "PIE: A large-scale dataset and models for pedestrian intention estimation and trajectory prediction." Proceedings of the IEEE International Conference on Computer Vision. 2019.



New idea (Pedestrian Intention Estimation)
Inputs:
● RGB data
● pedestrian factors (pose, location, velocity, orientation, and head/hand gestures of the pedestrian)
● contextual factors (traffic conditions (light, sign, zebra crossing, etc.), road condition, vehicle behavior, weather condition, etc.)

Pipeline: Images ----> 2D detection and tracking ----> CNNs ----> fused features (pedestrian factors + contextual factors) ----> RNN ----> Prediction
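The fusion idea above can be sketched as concatenating CNN features with factor vectors and feeding an RNN. The dimensions, the plain Elman cell (standing in for a trained LSTM/GRU), and the random weights below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(cnn_feat, ped_factors, ctx_factors):
    """Late fusion by concatenation: visual features from the CNN are
    joined with pedestrian factors (pose, velocity, orientation, ...)
    and contextual factors (traffic light state, zebra crossing, ...)."""
    return np.concatenate([cnn_feat, ped_factors, ctx_factors])

class TinyRNN:
    """Minimal Elman RNN with a sigmoid head, standing in for the RNN
    prediction stage; a real system would train an LSTM/GRU on labeled
    crossing events rather than use random weights."""
    def __init__(self, in_dim, hidden=32):
        self.Wx = rng.normal(0, 0.1, (hidden, in_dim))
        self.Wh = rng.normal(0, 0.1, (hidden, hidden))
        self.w_out = rng.normal(0, 0.1, hidden)
        self.h = np.zeros(hidden)

    def step(self, x):
        # One recurrent update per video frame, then a crossing probability.
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
        logit = self.w_out @ self.h
        return 1.0 / (1.0 + np.exp(-logit))  # P(cross)
```

Calling `step` once per frame lets the hidden state accumulate evidence over time before the final crossing decision.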



Thanks! Haolin Zhang (M.S. Candidate) Control and Intelligent Transportation Research (CITR) Lab Center for Automotive Research (CAR) Department of Electrical and Computer Engineering The Ohio State University

Nov 30, 2020


S E S S I O N 2 – P R E S E N T A T I O N # 3

“Trajectory dataset of vehicle-pedestrian interaction in shared spaces” Dongfang Yang PhD Candidate Center for Automotive Research The Ohio State University


Trajectory Dataset of Vehicle-Pedestrian Interaction in Shared Spaces A Dataset for Crowded Pedestrians Interacting with Vehicles

Dongfang Yang, Ph.D. Candidate Center for Automotive Research (CAR) – CAR West Department of Electrical and Computer Engineering The Ohio State University Nov 30, 2020


Background • Mixed traffic scenarios, e.g., shared spaces/streets, are indispensable components of transportation systems. • Pedestrians are one of the most important participants.

A highly mixed shared plaza in Graz, Austria.

Shibuya Crossing, an extremely crowded intersection in Tokyo, Japan


Motivation • It is important to understand the behavior of crowded pedestrians that interact with vehicles • Assist urban design analysis • Improve pedestrian safety • Benefit autonomous driving in crowded scenarios

• A dataset for such pedestrian behavior would be helpful. • Existing public datasets do not cover such scenarios • Crowded but only for pedestrians: ETH dataset, UCY dataset • Having vehicles but limited vehicle-pedestrian interaction: Stanford Drone Dataset, inD dataset

ETH dataset

UCY dataset

Stanford Drone dataset

inD dataset


Overview • Contributions: created the CITR/DUT pedestrian trajectory dataset • CITR dataset: scenarios created by a series of controlled experiments • Study specific interaction patterns (e.g., front interaction, back interaction)

• DUT dataset: real interaction scenarios on campus • Capture unaffected real life interaction patterns

• Highlights: • Bird’s eye view – no pedestrian occlusion • UAV as recording equipment

• High quality trajectory – smoothed, high FPS (>20 Hz) • A fast trajectory extraction algorithm • Noise filtering

• Open-sourced: accessible on GitHub • CITR dataset: https://github.com/dongfang-steven-yang/vci-dataset-citr • DUT dataset: https://github.com/dongfang-steven-yang/vci-dataset-dut



Overview

Description of CITR/DUT Datasets

Methodology of Trajectory Extraction

Trajectory Results



CITR Dataset • Controlled Experiments

• Experiment location: a parking lot near the Control and Intelligent Transportation Research (CITR) Lab
• Vehicle employed: an EZ-GO golf cart
• Recording equipment: a DJI Phantom 3 SE UAV
• Participants: members of the Control and Intelligent Transportation Research (CITR) Lab
• Designed six fundamental scenarios that can be compared pairwise

UAV for recording Experiment snapshot

Experiment vehicle


CITR Dataset • Fundamental Scenarios

They are pairwise comparable.

Red arrows: flows of pedestrians. Blue arrows: vehicle motion



CITR Dataset • Fundamental Scenarios

Unidirectional Pedestrian Flow

Bidirectional Pedestrian Flows

Red arrows: flows of pedestrians. Blue arrows: vehicle motion



CITR Dataset • Fundamental Scenarios

Without Vehicle

With Vehicle

Red arrows: flows of pedestrians. Blue arrows: vehicle motion



CITR Dataset • Fundamental Scenarios

Front Interaction

Back Interaction

Red arrows: flows of pedestrians. Blue arrows: vehicle motion



DUT Dataset • Scenario Configuration • Real scenarios to supplement the controlled experiments • Selected two locations on the campus of Dalian University of Technology (DUT) • Recording equipment: a DJI Mavic Pro UAV

Location 1: an irregular intersection with an uncontrolled crosswalk

Location 2: a large open shared roundabout



Trajectory Extraction • Methodology of Trajectory Extraction

• Video Stabilization – create a consistent video background to improve trajectory extraction performance
• Use the 1st frame as the reference frame; match the backgrounds of the remaining frames to the 1st frame
• Scale-Invariant Feature Transform (SIFT) algorithm to find matching points
• Random Sample Consensus (RANSAC) algorithm to find the transformation
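As a sketch of the RANSAC idea in the stabilization step, the snippet below fits a pure-translation background-motion model to matched keypoints (the SIFT matching is assumed to be already done). The actual pipeline estimates a full perspective transform, e.g. with OpenCV's findHomography; a one-parameter-pair model keeps the outlier-rejection logic visible in a few lines.

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, thresh=2.0, seed=0):
    """Estimate a 2D translation dst ≈ src + t from point matches,
    rejecting outlier matches RANSAC-style.
    src, dst: (N, 2) arrays of matched keypoint coordinates."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))       # minimal sample: one match
        t = dst[i] - src[i]              # candidate translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best model for the final estimate.
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

Mismatched keypoints (e.g. on moving pedestrians rather than the background) end up as outliers and do not corrupt the estimated camera motion.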

• Vehicle and Pedestrian Tracking – extract the trajectories of pedestrians and vehicles
• Discriminative Correlation Filter with Channel and Spatial Reliability (CSR-DCF) algorithm
• Works well with a fixed background

• Coordinate Transform – convert the trajectories to world coordinates
• Trajectory Filtering – remove noise and smooth the trajectories
• Linear/nonlinear Kalman filter
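A minimal constant-velocity Kalman filter for the trajectory-smoothing step might look like this. The dt, process-noise q, and measurement-noise r values are illustrative, not the dataset's actual settings; for a >20 Hz recording, dt would be about 1/fps.

```python
import numpy as np

def kalman_smooth(positions, dt=1.0, q=1.0, r=0.05):
    """Forward pass of a linear constant-velocity Kalman filter over a
    noisy 2D trajectory. positions: (T, 2) raw positions; returns the
    filtered (T, 2) positions. State is [x, y, vx, vy]."""
    positions = np.asarray(positions, dtype=float)
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                          # position += velocity * dt
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                         # we observe position only
    Q = q * np.eye(4) * dt                          # process noise
    R = r * np.eye(2)                               # measurement noise
    x = np.zeros(4)
    x[:2] = positions[0]
    P = np.eye(4)
    out = []
    for z in positions:
        x = F @ x                                   # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                         # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```

A nonlinear (extended/unscented) variant would be used when the measurement model itself is nonlinear, e.g. filtering directly in image coordinates before the world-coordinate transform.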


Trajectory Results • CITR Dataset: • 38 clips of the controlled scenarios

• Each pedestrian has a unique ID • Total pedestrian trajectories: 340

• DUT Dataset: • 17 clips of crosswalk scenarios • 11 clips of shared roundabout scenarios • Total pedestrian trajectories: 1793

Trajectory quality. (solid: pedestrians, dashed: vehicles)



Trajectory Results • CITR Dataset Video Demo



Trajectory Results • DUT Dataset Video Demo



Possible Usage • Modeling of vehicle-pedestrian interaction • Pedestrian trajectory prediction in crowded scenarios • Crowd motion analysis • Pedestrian flow analysis • Group behavior modeling

• Studying individual pedestrian characteristics (using CITR dataset) • 10 participants in the controlled experiments • Each pedestrian has a unique ID



Conclusion • Proposed two datasets:
• CITR dataset: controlled experiments
• DUT dataset: real campus scenarios
• To support the study of crowded pedestrians interacting with vehicles
• Filled the gap that existing datasets do not cover crowded pedestrian-vehicle interaction

• Future work: • Expand to more locations – adding variation • Benchmarking some methodologies of vehicle-pedestrian interaction

• Online access: • CITR dataset: https://github.com/dongfang-steven-yang/vci-dataset-citr • DUT dataset: https://github.com/dongfang-steven-yang/vci-dataset-dut

• Publication: • Yang, Dongfang, Linhui Li, Keith Redmill, and Ümit Özgüner. "Top-view trajectories: A pedestrian dataset of vehicle-crowd interaction from controlled experiments and crowded campus." In 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 899-904. IEEE, 2019.



Thanks for listening! Questions?



Discussion and Q&A with our presenters


Thank you for attending! Dr. Ozguner Dr. Chen Dr. Yurtsever Ms. Tuj Johora Mr. Yang Mr. Zhang Mr. Koc

ozguner.1@osu.edu chen.118@osu.edu yurtsever.2@osu.edu fatema.tuj.johora@tu-clausthal.de yang.3455@osu.edu zhang.10749@osu.edu koc.15@osu.edu

For all other inquiries, please contact Jennifer Humphrey at Humphrey.113@osu.edu

