VISION BASED MODELLING OF THE ROBOTIC WORK CELL

Mitul Milind Gajbhiye¹ & Amol Chaudhary²

¹Research Scholar, Department of Mechanical Engineering, G. H. Raisoni College of Engineering, Affiliated to R.T.M.N.U., Nagpur, Maharashtra, India
²Assistant Professor, Department of Mechanical Engineering, G. H. Raisoni College of Engineering, Affiliated to R.T.M.N.U., Nagpur, Maharashtra, India

International Journal of Mechanical Engineering (IJME), ISSN (P): 2319-2240; ISSN (E): 2319-2259, Vol. 10, Issue 1, Jan–Jun 2021; 29–34. © IASET

ABSTRACT

This paper presents a kinematics-based incremental visual servo control approach for robotic manipulators with an eye-in-hand configuration for the autonomous capture of non-cooperative targets. The vision system estimates the real-time position and motion of the target with an integrated algorithm combining photogrammetry and an adaptive extended Kalman filter. The unknown interception point of the trajectories of the target and the end-effector is dynamically predicted and updated based on the target estimates, and serves as the desired position of the end-effector. An incremental control law is developed for the robotic manipulator to avoid the multiple solutions of the robotic inverse kinematics. The end-effector is then driven by the proposed control scheme to approach the dynamically estimated interception point directly. In addition, the simulation framework proposed in this paper can serve as a good test bench to compare the performance of either a new control law or a different dynamics algorithm. As an illustration, the DeNOC-based dynamics was substituted with MATLAB's SimMechanics, which can also perform the dynamic simulation.

KEYWORDS: DeNOC, PBVS, IBVS, DLL, AEKF, NLOPT

Article History Received: 17 Apr 2021 | Revised: 28 Mar 2021 | Accepted: 04 May 2021

INTRODUCTION

Force-controlled robots appear to have outreached traditional robots with passive springs at the end-effector when it comes to rapidly adapting to a changing environment for compliant manipulation, surface finishing tasks, and assembly operations. These robots are controlled in closed loop with active force/torque sensory feedback. As they interact closely with the environment, it becomes necessary to precisely design and test an algorithm before it is actually made operational on real robots. Simulating such force/torque control algorithms with a virtual robot could be one possible answer to this problem. As force control algorithms have evolved over the last three decades (Zeng and Hemami, 1997) and different techniques are used for simulating and testing these algorithms, there is a need for a single platform on which one can evaluate these control algorithms, or a particular robot dynamics algorithm, before deciding to use them for a specific application. For that, the dynamics is important, particularly when the robot has to move fast, e.g., for cooperative manipulation. In recent years, the use of robotic operations based on visual instructions in modern manufacturing has increased rapidly.

Modeling of Robotic Work Cell

Figure 1: Overview of the Proposed System, Experimental Diagram of the Work Cell.

Visual Servoing

Visual servoing, as an active visual technology, has attracted wide attention. However, each of the two main visual servoing methods, position-based visual servoing (PBVS) and image-based visual servoing (IBVS), has a bottleneck that constrains the development of this field: after successfully identifying the target object, PBVS has to reconstruct the target object in three dimensions, whereas IBVS has to approximate the distance between the tip of the arm and the target image. This means both place high precision requirements on the depth information of the target image in the camera's field of view. Robotic manipulators have been extensively adopted in various fields to satisfy growing application demands for dexterity, efficiency, and automation, in missions such as on-orbit servicing and active debris removal. Capture of targets with high accuracy and autonomy is attracting increasing attention in robotics, especially in space applications where the targets are non-cooperative. Many on-orbit servicing missions have been performed successfully with human participation, whereas fully autonomous robotic capture of a specific target still faces many challenges, such as target estimation and robotic control. Target motion estimation is essential for controlling the robotic manipulator to achieve the desired position/motion of the end-effector. Because of its non-contact and non-invasive nature, a vision system is generally favoured in robotic control for target estimation, and the corresponding control approach is known as visual servoing.
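To make the PBVS bottleneck concrete: the controller acts on a reconstructed 3-D pose error. Below is a minimal sketch of the classic proportional PBVS law, included only as background material rather than as this paper's incremental controller; the gain value is illustrative.

```python
import numpy as np

def pbvs_twist(t_err, R_err, lam=0.5):
    """Classic proportional PBVS law: map the reconstructed pose error
    (translation t_err, rotation matrix R_err) to a commanded camera
    twist [v, w]. Assumes the rotation error angle is below pi, so the
    log map of R_err is well defined."""
    # rotation angle from the trace of the error rotation matrix
    theta = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        axis = np.zeros(3)
    else:
        # rotation axis from the skew-symmetric part of R_err
        axis = np.array([R_err[2, 1] - R_err[1, 2],
                         R_err[0, 2] - R_err[2, 0],
                         R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(theta))
    v = -lam * t_err           # linear velocity command
    w = -lam * theta * axis    # angular velocity command
    return np.hstack([v, w])
```

The same error drives both translation and rotation to zero exponentially; IBVS, by contrast, regulates feature coordinates directly in the image plane, which is why it instead needs a depth approximation.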

OBJECT RECOGNITION USING IMAGE PROCESSING

Machine Vision Module

The vision system used for target detection and tracking can be mounted on the robotic manipulator, referred to as eye-in-hand, or fixed in the workspace, referred to as eye-to-hand. The eye-in-hand configuration provides an elaborate and detailed view of the target, although the visual information is coupled with the motion of the robotic manipulator. This coupling effect can be decoupled if feedback of the state of the robotic manipulator is available, and the accuracy and detail of the target view increase as the end-effector approaches the target. In contrast, the eye-to-hand configuration monitors the entire workspace and provides a global but less accurate view of the target. In some circumstances during robotic operations, the view of the target may be blocked by the robotic manipulator in the eye-to-hand configuration, leading to tracking failure. Therefore, the eye-in-hand configuration is adopted in the current study to ensure high control accuracy in the capture phase. In visual servo control, it is imperative to keep the target within the camera's field of view, especially in the eye-in-hand configuration when the end-effector is in the neighbourhood of the target. Common approaches in the literature adopt either wide-angle cameras or motorized cameras with orientation control. However, the control of camera orientation typically couples with the control of the end-effector and complicates the controller design. The present work is focused on the development of a kinematics-based incremental visual servo for robotic capture; therefore, a camera with a large field of view is adopted in the current study to enable the development and validation of the proposed visual servo control approach. Several vision-based methods for target estimation have been developed, such as offline estimation methods, geometric methods, learning-based optimization methods, and filtering-based methods. The offline estimation methods, as the name implies, perform the estimation after the fact, which can offer higher accuracy but is not applicable to real-time applications. The geometric methods, such as photogrammetry, generally do not exploit the prior knowledge gathered in the experimental process and are therefore vulnerable to measurement noise.

Position Estimation Using Image Processing

The estimation strategies based on learning and filtering technologies, such as the neural extended Kalman filter, directly improve the process model online from a provided baseline model and a neural network, and have achieved high estimation accuracy in uncertain environments. Alternatively, other adaptive filtering methods treat the modelling error and other uncertainties as noises whose statistics are dynamically updated by bounded adaptive mechanisms. Based on the requirements for real-time autonomous capture, an integrated algorithm of photogrammetry and the adaptive extended Kalman filter (AEKF) is employed in the current work for real-time estimation of the position and velocity of the non-cooperative target.
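The AEKF equations are not reproduced in this section, so the sketch below only illustrates the filtering idea under simplifying assumptions: a linear constant-velocity target model, direct position measurements standing in for the photogrammetry output, and one common innovation-based scheme for adapting the measurement-noise covariance. The noise levels and forgetting factor are illustrative, not the paper's tuning.

```python
import numpy as np

class AdaptiveKalmanFilter:
    """Constant-velocity target filter with innovation-based adaptation of the
    measurement-noise covariance R -- a simplified stand-in for the AEKF."""

    def __init__(self, dt, q=1e-4, r0=1e-2, alpha=0.97):
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                    # state: [position, velocity]
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # position is measured
        self.Q = q * np.eye(6)                             # process-noise covariance
        self.R = r0 * np.eye(3)                            # measurement noise (adapted)
        self.alpha = alpha                                 # forgetting factor
        self.x = np.zeros(6)
        self.P = np.eye(6)

    def step(self, z):
        # prediction
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # innovation, then adapt R as a fading-memory average of y y^T
        y = z - self.H @ self.x
        self.R = self.alpha * self.R + (1.0 - self.alpha) * np.outer(y, y)
        # correction
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3], self.x[3:]          # estimated position and velocity
```

In the actual system the measurement would come from the photogrammetry solution and the model would be nonlinear, hence the "extended" filter; the adaptation step is what lets the filter absorb modelling error as time-varying noise.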
Once the pose and motion of the target have been estimated, a control strategy must be developed to capture the dynamic target. In our previous works, the estimated current target position was taken as the desired position in the proposed control law. It was found that the end-effector would continuously chase the rear of the target while tracking and capturing it, which is clearly not optimal. To address this issue, the present work takes the interception point of the trajectories of the target and the end-effector as the desired position in the control law. In addition, the velocity of the end-effector at interception should be aligned with the target's velocity as much as possible to avoid hard contact at capture. However, determining the interception point is difficult, since the target trajectory is unknown a priori because of its non-cooperative nature. This challenge is compounded by the fact that the time-varying nonlinear motion of the end-effector induces variation of the tracking time and thereby affects the determination of the interception point. Furthermore, for any given position of the end-effector in Cartesian space, the inverse kinematics of the robotic manipulator must be solved to obtain the corresponding position in joint space. A problem arises in the inverse kinematics, where multiple solutions occur because of the periodicity of trigonometric functions and the redundant geometric configuration of the robotic manipulator. In our previous work, the problem of multiple solutions in direct inverse kinematics was avoided by considering physical constraints, which is problem specific. In the current work, a kinematics-based incremental control strategy is proposed to avoid the multiple solutions in a generic manner. The proposed approach is validated experimentally using a dynamic non-cooperative target and a custom-built robotic manipulator with an eye-in-hand configuration.
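The incremental strategy itself is developed later in the paper; as a hypothetical illustration of why incremental updates sidestep the multi-valued inverse kinematics, the sketch below commands small joint increments through the Jacobian pseudoinverse instead of ever solving the closed-form inverse problem. The gain, step bound, and `jacobian` callback are assumptions for illustration only.

```python
import numpy as np

def incremental_step(q, x_current, x_desired, jacobian, gain=0.5, dq_max=0.05):
    """One incremental joint update toward a Cartesian goal. Because the
    update is differential (dq = J^+ dx), the closed-form inverse kinematics
    with its multiple solution branches is never invoked."""
    dx = gain * (x_desired - x_current)   # small Cartesian step toward the goal
    J = jacobian(q)                       # manipulator Jacobian at the current joints
    dq = np.linalg.pinv(J) @ dx           # least-squares joint increment
    # saturate the increment so the local linearization stays valid
    scale = min(1.0, dq_max / (np.linalg.norm(dq) + 1e-12))
    return q + scale * dq
```

Repeating this step at the servo rate keeps the joint trajectory on the solution branch it started from, which is the generic behaviour an incremental scheme aims for.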


Object detection using image processing is a method of converting an image into digital form and performing operations on it, in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output is either an image or characteristics associated with that image. An image processing system typically treats images as two-dimensional signals and applies established signal processing methods to them.

a) Human Face Detection b) Object Detection
Figure 2: Human Face and Object Detection in Real Time using TensorFlow (GitHub).

Image processing mainly involves the following three steps: importing the image with an optical scanner or by photography; analysing and manipulating the image, which includes data compression, image enhancement, and recognition of patterns that are not visible to the human eye, as in satellite images; and output, the last stage, in which the result is an altered image or a report based on the image analysis.
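As a concrete sketch of these three stages, the snippet below uses OpenCV (4.x) rather than the TensorFlow detector shown in Figure 2; the file names and Canny thresholds are placeholders.

```python
import cv2

image = cv2.imread("frame.png")                      # 1) import the image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
enhanced = cv2.equalizeHist(gray)                    # 2) enhance: histogram equalization
edges = cv2.Canny(enhanced, 50, 150)                 #    extract edge patterns
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
# 3) output: a report on the analysis and/or an altered image
print(f"detected {len(contours)} candidate object contours")
cv2.imwrite("edges.png", edges)
```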

Optimal Trajectory Generation for Specific Objects

Figure 3: A Comparison of Three Different Trajectory Profiles for the 7-DOF KUKA LWR.

The problem of generating time-optimal and smooth trajectories has been studied extensively in previous work. The proposed trajectory planning techniques can generally be categorized into two classes: online real-time planning and offline planning. Online real-time trajectory planning targets a dynamic and fast modification of the planned trajectories in case of unforeseen events. The online approaches usually rely on the manipulator's current state and the goal state to generate the motion trajectory. A third-degree polynomial curve is necessary to impose a limit on the jerk of the motion. In order to find an optimal trajectory with finite jerk, cubic splines and B-splines are usually used to represent the trajectory between two consecutive waypoints. The trajectory generation can then be formulated as a nonlinear, non-convex constrained optimization problem, which can be solved by means of a sequential quadratic programming (SQP) algorithm that decomposes the non-convex problem into a sequence of convex subproblems. We use the SLSQP solver from NLOPT for our implementation. With the high dimensionality of the configuration space and a large number of waypoints, the complexity of the optimization problem and the size of the optimization space increase significantly. A good initial estimate of the trajectory perceptibly affects the convergence of the optimization routine. To handle this problem, we apply ideas from Model Predictive Control (MPC).
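To make the SQP formulation concrete, the deliberately simplified sketch below uses NLOPT's SLSQP solver through its Python bindings. It minimizes total motion time over per-segment durations subject only to average-velocity limits; the waypoints and the limit are invented for illustration, whereas the real formulation would optimize spline coefficients under full velocity, acceleration, and jerk constraints.

```python
import numpy as np
import nlopt

# Toy time-optimal problem: choose segment durations t[i] between consecutive
# waypoints to minimize total motion time. Waypoints and v_max are illustrative.
waypoints = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.8]])
seg_len = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)  # segment lengths
v_max = 0.4                                                   # velocity limit
n = len(seg_len)

def objective(t, grad):
    if grad.size > 0:
        grad[:] = 1.0            # d(sum t_i)/dt_i = 1
    return float(np.sum(t))      # total motion time

opt = nlopt.opt(nlopt.LD_SLSQP, n)
opt.set_min_objective(objective)
opt.set_lower_bounds(1e-3 * np.ones(n))

for i in range(n):
    def vel_limit(t, grad, i=i):
        # seg_len[i] / t[i] <= v_max, rewritten as seg_len[i] - v_max * t[i] <= 0
        if grad.size > 0:
            grad[:] = 0.0
            grad[i] = -v_max
        return float(seg_len[i] - v_max * t[i])
    opt.add_inequality_constraint(vel_limit, 1e-8)

opt.set_xtol_rel(1e-6)
t_opt = opt.optimize(np.ones(n))   # initial guess: 1 s per segment
print("segment durations:", t_opt, "total time:", opt.last_optimum_value())
```

For this toy problem the optimum is simply t[i] = seg_len[i]/v_max; the SQP machinery earns its keep once acceleration and jerk constraints couple adjacent segments and the problem stops being separable.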

Figure 4: Trajectories for Various Interpolation Models on Path Planning Use Cases for (a)–(d) a KUKA LWR Next to a Table with a Parallel Gripper and (e) a KUKA KR60-3 Next to a Wall and 3 Columns with a Vacuum Gripper.

CONCLUSIONS

In this paper, we have presented an approach to find a time-optimal trajectory passing through given waypoints under kinematic constraints. Unlike previous approaches that model the trajectory between two adjacent waypoints as a cubic spline or as linear parabolic segments, this approach adopts a trapezoidal acceleration profile to represent the trajectory. We require this trajectory to move through all waypoints while exploiting the manipulator's capabilities to reduce the motion time and while guaranteeing that the kinematic limits are always satisfied. Our proposed bridged optimization approach has linear complexity in the number of waypoints, compared to a full trajectory optimization. Evaluations on two practical examples from path planning have shown how the three approaches compare with each other. There are some limitations that we observed in our experiments: in cases where there are large rotations between consecutive waypoints, our method can take longer to converge and typically does not yield the optimal solution.

REFERENCES

1. Akan, B., Ameri, A., Curuklu, B., & Asplund, L. (2011). Intuitive industrial robot programming through incremental multimodal language and augmented reality. In IEEE International Conference on Robotics and Automation.

2. Ahmed, S., Wang, H., & Tian, Y. (2018). Fault tolerant control using fractional-order terminal sliding mode control for robotic manipulators. Studies in Informatics and Control, 27(1), 55–64. ISSN 1220-1766.

3. Bingul, Z., Ertunc, H. M., & Oysu, C. (2005). Comparison of inverse kinematics solutions using neural network for 6R robot manipulator with offset. In Computational Intelligence Methods and Applications Congress.

4. Corke, P. I. (2017). Robotics, Vision & Control. Springer. ISBN 978-3-319-54413-7.

5. Gonzalez, R. C., & Woods, R. E. (2008). Digital Image Processing (3rd ed.). Prentice Hall.

6. Dong, G., & Zhu, Z. H. (2018). Kinematics-based incremental visual servo for robotic capture of non-cooperative target. Robotics and Autonomous Systems. https://doi.org/10.1016/j.robot.2018.10.011

7. Ulrich, S., Sasiadek, J. Z., & Barkana, I. (2014). Nonlinear adaptive output feedback control of flexible-joint space manipulators with joint stiffness uncertainties. Journal of Guidance, Control, and Dynamics, 37, 1961–1975.

8. Chen, S. Y. (2012). Kalman filter for robot vision: A survey. IEEE Transactions on Industrial Electronics, 59, 4409–4420.

9. Kramer, K. A., & Stubberud, S. C. (2006). Analysis and implementation of a neural extended Kalman filter for target tracking. International Journal of Neural Systems, 16, 1–13.

10. Landi, C. T., Villani, V., Ferraguti, F., Sabattini, L., Secchi, C., & Fantuzzi, C. (2018). Relieving operators' workload: Towards affective robotics in industrial scenarios. Mechatronics, 54, 144–154.

11. Landi, C. T., Ferraguti, F., Secchi, C., & Fantuzzi, C. (2016). Tool compensation in walk-through programming for admittance-controlled robots. In Proceedings of the 42nd Annual Conference of the IEEE Industrial Electronics Society, Firenze, Italy.

12. Compensation of load dynamics for admittance controlled interactive industrial robots using a quaternion-based Kalman filter. IEEE Robotics and Automation Letters, 2(2), 672–679, 2017.

13. Ilonen, J., & Kyrki, V. (2011). Robust robot-camera calibration. In Proceedings of the 15th International Conference on Advanced Robotics, Tallinn, Estonia, 20–23 June.

14. Kailin, Z., & Liang, Z. (2017). 3D object recognition and 6DoF pose estimation in scenes with occlusions and clutter based on C-SHOT 3D descriptor. Journal of Computer-Aided Design & Computer Graphics, 29, 846–853.
