University of Pécs, Pollack Mihály Faculty of Engineering and Information Technology
Scientific Electrotechnical Conference
SCIENCE IN PRACTICE 2012
SIP 2012 30th INTERNATIONAL CONFERENCE SCIENCE IN PRACTICE Pécs, Hungary, October 29-30, 2012
Proceedings
Publisher Responsible: Dr. Zoltán KVASZNICZA, PhD
Editor: Ildikó HORVÁTH
Design and Layout: Gábor SIPOS
Published by the University of Pécs, Pollack Mihály Faculty of Engineering and Information Technology, Hungary, 2012
PREFACE

30th Science in Practice Conference
Keynote Address by Dr Zoltán Kvasznicza

Distinguished Ladies and Gentlemen,

It is my great pleasure to welcome you to the Pollack Mihály Faculty of Engineering and Information Technology, which is now celebrating its 50th anniversary, and to greet the participants of the Science in Practice Scientific Electrotechnical Conference. It is good to share special occasions with friends, and I am proud to say that our guests from the Hochschule Bremen, the Fachhochschule Würzburg-Schweinfurt, the Polytechnical Engineering College Subotica, the Josip Juraj Strossmayer University of Osijek and the Kandó Kálmán Faculty of Engineering have been not only colleagues but friends for many years.

The conference, held every year, provides a great opportunity to learn about the latest scientific and educational achievements and developments in the fields of electrotechnics, informatics, automation and robotics. The Day 0 programme of the conference, the wine tasting in Villány in which participants took part yesterday, is always a great success.

I hope that beyond the official conference programme you will have time to discover the sights, the architectural masterpieces and all those attractions of the town which make us, local residents, proud of living here. I wish you successful presentations, fruitful discussions and a very pleasant stay during the conference. Let us celebrate the 50th anniversary of the Faculty together!
Pécs, October 2012
POLLACK IS 50 YEARS OLD
OUR ROAD FROM POST-SECONDARY TECHNICAL SCHOOL TO THE VENICE BIENNALE

Dr. József Ásványi, Retired Professor
Dr. Zoltán Kvasznicza, Vice Dean

William Harvey was fifty years old when he published his study on the theory of blood circulation, a milestone in medical science. Charles Darwin was fifty years old when his book 'The Origin of Species' was published, causing profound confusion throughout the Christian world. István Széchenyi, 'the Greatest Hungarian', was fifty years old when the foundations of the Chain Bridge were laid in Budapest, connecting the two halves of the city and bridging historical eras. It seems that the completion of the fiftieth year brought outstanding results in the lives of some outstanding personalities. Not only individuals tend to take account of their achievements when a jubilee comes; an institution may do so as well. Our institution, the Faculty of Engineering and Information Technology of the University of Pécs, celebrates its jubilee this year. It is a perfect occasion to look at where we started, where we are and where we want to go in the coming years and decades.

The beginnings of the institution date back to 1962, when the government of the day decided to reorganise tertiary technical education into a multi-level system. A network of 'post-secondary technical schools' was created to cater to the demands of Hungarian industry, and the two predecessors of Pollack were established: the 'Post-secondary Technical School of Construction and Construction Materials' in Budapest and the 'Post-secondary Technical School of Chemo-mechanical Engineering' in Pécs. As their names suggest, the former trained higher-level technicians for the construction and construction materials industries, while the latter did the same for the chemical industry. This event signalled the beginning of tertiary technical education in the South Transdanubian region.
The next milestone in our institution's life came in 1970 when, through the merger of the two post-secondary technical schools, Pollack Mihály Technical College was established. According to its foundation charter, the objective of the college was to train production engineers, mainly for the construction and construction materials industries. The training programmes and the degrees awarded (in building electrical engineering, construction and civil engineering, technical teacher training, and silicate and chemo-mechanical engineering) met the demands of the Hungarian industry of the time. Nowadays many people praise profession-oriented training, which we started as early as the 1970s. The government's extensive home building project called for the launch of new engineer training programmes in several fields, and we were the first to launch elevator engineer training, at the department of building electrical engineering, to provide skilled professionals for the Hungarian elevator industry.
In the history of the Faculty there are two other important dates: 1995 and 2004. In 1995 our institution was integrated into Janus Pannonius University, the predecessor of the University of Pécs, while in 2004 we became a university faculty. The institution has witnessed multiple reorganisations, relocations, curriculum modifications and personnel changes over the years. We have operated in faculty and institute form, as an independent unit and as a university unit, and our name has been altered five times because of organisational changes. Not only has our training programme been modernised recently; our school building was also refurbished in 2007, when the complete reconstruction of the Faculty was accomplished within the scope of a PPP investment. It is now possible to hold lectures and practice sessions with multimedia support in every room, and the whole Boszorkány street building provides Wi-Fi access for the notebooks of teachers and students alike. We are especially glad that the plans for the reconstruction were drawn up by our architecture colleagues, who graduated from Pollack and still work here. In 2003 the Breuer Marcel Doctoral School was founded at our Faculty, providing our students with a DLA doctoral programme in architecture and a PhD educational programme. As a special feature of the doctoral school, the students carry out their research through real 'live' tasks in industrial design, space programming, case studies, and urban and settlement development. An excellent example of this is the Science Building project of the University of Pécs, which was planned and designed by DLA doctoral students. The personnel development of the training is best shown with a few figures: at the beginning, 2 university doctors and 2 candidates were employed on the college lecturing staff.
At present, the Faculty employs 6 candidates, 4 academic doctors, 36 tutor-researchers with a DLA degree and 50 tutor-researchers with a PhD degree. Initially, in 1962, 193 students started their studies at Pollack in the five branches of production engineering listed above. This year, the number of enrolled first-year students is 853, and they are trained in 23 different fields (see the Faculty website for details): 7 courses for training engineer assistants, 8 basic courses (7 BSc and 1 BA), 8 master courses (5 MSc and 3 MA), and doctoral programmes (PhD and DLA). Besides actively taking part in the life of Hungarian higher education in engineering, we have always placed great emphasis on nourishing our international connections. At the beginning, understandably, this involved only the colleges and universities of the so-called socialist countries, but by the beginning of the 1980s we were able to establish connections with institutes in England and Bavaria. At present we have working relationships with 36 foreign institutes of higher education, realised through teacher and student exchange programmes, professional study visits, joint tender applications, conferences and joint research work. Within the framework of SOCRATES/Erasmus programmes, we not only send but also receive students for part of their studies. Pollack Expo, which has grown from a faculty event into an engineering expo of national recognition, has now been held for the seventh time, to facilitate cooperation between industry and education and to acquaint our students with the latest developments in engineering. And we have not yet said a word about the devoted work of our principals, deans and lecturers and their industrial developments, about the masterpieces that have brought nationwide and worldwide reputation to our architects, or about our students' professional achievements.
To close our brief summary, let us boast a little about our latest international success.
Dr Balázs Markó and our dean, Dr Bálint Bachmann, jointly won the right to organise the Hungarian Pavilion of the 13th Venice Architecture Biennale, inviting partner institutes from both Hungary and abroad. The exhibition of the Hungarian Pavilion, which was a great success, was designed and constructed by the doctoral students of the Breuer Marcel Doctoral School. We have turned 50 years old. We do not design bridges, but through our planning work at national and regional level, and through the organisation of Pollack Expo, we have built a bridge between education and industry. Our achievements are not as significant as Harvey's research results, but with the value-creating work of our tutors and students we have entered the international arena of the engineering community. And what are we planning for the next 50 years? Our past predestines us to dare to dream big! In the coming years, with our creative and talented colleagues and students, and with plenty of work, we would like to set the technical world in motion!
Pécs, October 2012
TIMETABLE

OCTOBER 29 (MONDAY) Plenary Session
8:00 – 9:00 Registration at the 'Pollack' Campus (Pécs, Boszorkány út 2.)
9:00 – 9:30 Bus leaves for the 'Szentágothai' Research Centre
10:00 – 12:30 Plenary Session in the 'Szentágothai' Research Centre
12:30 – 13:00 Reception at the 'Szentágothai' Research Centre
13:00 – 14:00 Presentation about the building of the 'Szentágothai' Research Centre by Bálint Bachmann DLA
14:00 – 14:30 Bus leaves for the 'Pollack' Campus
14:30 – 15:00 Snack lunch (sandwiches) served at the 'Pollack' Campus
15:00 – 15:30 Opening Session of the Symposium
15:30 – 16:00 Symposium Photo, Coffee Break

16:00 – 18:30 Session 1, Chair: dr. József Ásványi
16:00 – 16:05 OFFICIAL OPENING – dr. József Ásványi
16:05 – 16:25 S. F. Peik, T. Henning, D. Robben: LASER-MACHINED MICRO SWITCHES FOR RADIO-FREQUENCY APPLICATIONS
16:25 – 16:45 Viktor Bagdán, Kálmán Máthé, László Czimerman, József Pytel: ELECTRONIC DEVICE FOR PREVENTING HEARING-LOSS
16:45 – 17:05 Peter Möhringer: INTRODUCING S3D-TV INTO THE „REPETITORIUM FERNSEHTECHNIK“
17:05 – 17:25 Michael Hartje: DIGITALE SPRACHCODIERUNG MIT VOCODER BEI NIEDRIGEN BITRATEN (EXPERIMENTALVORTRAG)
17:25 – 17:45 János Simon, István Matijevics: REMOTE CONTROL OF ANTHROPOMORPHIC ROBOTIC PLATFORM FOR SOCIALLY ACCEPTABLE AND ADEQUATE INTERACTION IN HUMAN'S WORKING ENVIRONMENT
17:45 – 18:05 György Elmer: MEMRISTOR IN ESD PROTECTION
18:05 – 18:25 Zoltán Kvasznicza: DISTURBANCES EMITTED TO THE ENVIRONMENT BY ELEVATOR DRIVE APPLICATIONS

19:00 – 21:30 Symposium Dinner (Restaurant 'Tettye')
OCTOBER 30 (TUESDAY)

9:00 – 11:00 Session 2 (A202), Chair: Dr. György Elmer
9:00 – 9:20 Srete Nikolovski, Zvonimir Klaić, Krešimir Fekete: POWER QUALITY INDICES OF THE FIRST ON-THE-GROUND PV POWER PLANT IN EASTERN CROATIA
9:20 – 9:40 János Füzi: DESIGN AND PERFORMANCE ANALYSIS OF RADIOFREQUENCY ADIABATIC NEUTRON SPIN FLIPPERS
9:40 – 10:00 Milan Ivanovic, Hrvoje Glavas, Dubravka Spiranovic-Kanizaj: ENERGETIC EFFICIENCY AND THE RENEWABLE ENERGY SOURCES IN THE SLAVONIA REGION
10:00 – 10:20 Michael Hartje: LEISTUNGSGRADIENTEN VON WINDKRAFTANLAGEN
10:20 – 10:40 Gergely Nyitray: WAVE CONTRACTION IN FREE SPACE BY SELF-CONFINING WAVES
10:40 – 11:00 Vedrana Jerkovic, Zejlko Spoljaric, Kresimir Miklosevic, Zeljko Hederic: STABILITY PREDICTION OF A SMALL BIOGAS PLANT IN ELECTRIC POWER SYSTEM

9:00 – 11:00 Session 3 (A201), Chair: Péter Megyeri
9:00 – 9:20 Zsolt Markella, Tibor Vizkelety: SÉRÜLT SZEMÜREG REKONSTRUKCIÓJÁNAK TERVEZÉSE
9:20 – 9:40 Antal Ürmös: EPITAXIÁLISAN NÖVESZTETT RÉTEGEK ECV MÉRÉSSEL MÉRT KONCENTRÁCIÓ PROFILKORREKCIÓJA
9:40 – 10:00 Tibor Malkó, Péter Megyeri: VIRTUAL REALITY BASED SIMULATION ENVIRONMENT FOR AUTONOMOUS INTELLIGENT ROBOTS
10:00 – 10:20 Zoltán Zidarics: NEW DEAL IN INDUSTRIAL PROCESS CONTROLLING AND VISUALIZATION
10:20 – 10:40 Kálmán Máthé, Zoltán Vizvári, Péter Odry, Ferenc Henézi: SOKCSATORNÁS DSP ALAPÚ, KOMPLEX ELEKTROMOS IMPEDANCIA MÉRŐ RENDSZER FEJLESZTÉSE ÉS ALKALMAZÁSI LEHETŐSÉGEI
10:40 – 11:00 Zsolt Molnár: TEREPI REGISZTRÁLÓ MŰSZEREK HIDROLÓGIAI ÉS GEOLÓGIAI MÉRÉSEKHEZ

11:00 – 11:30 Coffee Break

11:30 – 12:50 Session 4 (A202), Chair: Dr. Gergely Nyitray
11:30 – 11:50 Zoran Balkic: NoSQL IN WIRELESS SENSOR DATA STORAGE
11:50 – 12:10 Denis Vranješ, Snježana Rimac-Drlje, Mario Vranješ: OVERVIEW OF UPSCALING METHODS FOR SCALABLE CODED VIDEO
12:10 – 12:30 Zoran Balkic: VISUALIZATION OF DYNAMIC SPATIOTEMPORAL DATA
12:30 – 12:50 Goran Horvat, Damir Šoštarić, Drago Žagar: REMOTE ENVIRONMENTAL NOISE MONITORING USING WIRELESS MULTIMEDIA SENSOR NETWORKS

11:30 – 13:10 Session 5 (A201), Chair: Péter Megyeri
11:30 – 11:50 Damir Šoštarić, Goran Horvat, Drago Žagar: OUTDOOR QUADRICOPTER TRAJECTORY TRACKING WITH UWB TAG LOCATOR
11:50 – 12:10 Igor Fuerstner, Laslo Gogolak, Szilveszter Pletl: SOLUTION DIVERSITY FOR A SPECIFIED PROJECT IN MECHATRONICS
12:10 – 12:30 Tomislav Matic, Milijana Zulj, Zeljko Hocenski: COMPARISON OF GENERAL PURPOSE GRAPHIC PROCESSOR UNITS AS SUBSTITUTION OF TRADITIONAL PROCESSORS
12:30 – 12:50 Franciska Hegyesi: BLENDED LEARNING IN ADULT EDUCATION IN THE ÓBUDA UNIVERSITY
12:50 – 13:10 Ildikó Horváth: POSSIBLE APPLICATIONS OF PROBLEM BASED LEARNING IN ENGINEERING IN TERTIARY EDUCATION

13:10 – 13:50 Lunch
14:10 – Visit to the 'Zsolnay' Cultural Quarter
TABLE OF CONTENTS

ELECTRONIC DEVICE FOR PREVENTING HEARING-LOSS ... 11
OUTDOOR QUADRICOPTER TRAJECTORY TRACKING WITH UWB TAG LOCATOR ... 15
OVERVIEW OF UPSCALING METHODS FOR SCALABLE CODED VIDEO ... 21
BLENDED LEARNING IN ADULT EDUCATION IN THE ÓBUDA UNIVERSITY ... 29
DESIGN AND PERFORMANCE ANALYSIS OF RADIOFREQUENCY ADIABATIC NEUTRON SPIN FLIPPERS ... 35
REMOTE ENVIRONMENTAL NOISE MONITORING USING WIRELESS MULTIMEDIA SENSOR NETWORKS ... 41
MEMRISTORS IN THE ESD PROTECTION ... 51
POSSIBLE APPLICATIONS OF PROBLEM BASED LEARNING IN ENGINEERING IN TERTIARY EDUCATION ... 55
SOLUTION DIVERSITY FOR A SPECIFIED PROJECT IN MECHATRONICS ... 59
ENERGETIC EFFICIENCY AND THE RENEWABLE ENERGY SOURCES IN THE SLAVONIA REGION ... 63
DISTURBANCES EMITTED TO THE ENVIRONMENT BY ELEVATOR DRIVE APPLICATIONS ... 71
SÉRÜLT SZEMÜREG REKONSTRUKCIÓJÁNAK TERVEZÉSE ... 75
TEREPI REGISZTRÁLÓ MŰSZEREK HIDROLÓGIAI ÉS GEOLÓGIAI MÉRÉSEKHEZ ... 79
VIRTUAL REALITY BASED SIMULATION ENVIRONMENT FOR AUTONOMOUS INTELLIGENT ROBOTS ... 89
INTRODUCING S3D-TV INTO THE „REPETITORIUM FERNSEHTECHNIK“ ... 93
POWER QUALITY INDICES OF THE FIRST ON-THE-GROUND PV POWER PLANT IN EASTERN CROATIA ... 97
REMOTE CONTROL OF ANTHROPOMORPHIC ROBOTIC PLATFORM FOR SOCIALLY ACCEPTABLE AND ADEQUATE INTERACTION IN HUMAN'S WORKING ENVIRONMENT ... 103
COMPARISON OF GENERAL PURPOSE GRAPHIC PROCESSOR UNITS AS A SUBSTITUTION FOR TRADITIONAL PROCESSORS ... 109
EPITAXIÁLISAN NÖVESZTETT RÉTEGEK ECV MÉRÉSSEL MÉRT KONCENTRÁCIÓ PROFIL KORREKCIÓJA ... 115
NOSQL IN WIRELESS SENSOR DATA STORAGE ... 119
VISUALIZATION OF DYNAMIC SPATIOTEMPORAL DATA ... 123
NEW SOLUTIONS IN INDUSTRIAL PROCESS VISUALIZATION AND CONTROLLING ... 129
ELECTRONIC DEVICE FOR PREVENTING HEARING-LOSS

Viktor Bagdán (1), Kálmán Máthé (2), László Czimerman (2), József Pytel MD (3)

1. Doctoral School of Health Sciences, University of Pécs, Hungary; 2. Pollack Mihály Faculty of Engineering and Information Technology, University of Pécs, Hungary; 3. Department of Otorhinolaryngology, Head and Neck Surgery, Medical School, University of Pécs, Hungary
Key words: Hearing-loss, Hearing-aid, Ear distortion, Non-Linear Transfer Function, Sensation of Volume
Background:
Hearing loss caused by high sound pressure levels (SPL) is a large and growing problem. Statistics among teenagers show a rising threshold of audibility, which means that our ears have started to adapt to louder sounds. The cause is our drawing away from the natural acoustic environment, a process that is accelerating. One example is the MP3 and MP4 players with plug-in earphones commonly used by teenagers. These devices can produce very high sound pressure levels, because they use extremely efficient class-D amplifiers, and the high SPL can be sustained for a long time. The 8-hour equivalent sound pressure level at which hearing damage begins is 85 dBA, and these devices can reach and exceed this limit very easily. The human ear accommodates to higher sound pressure levels within 10-15 minutes, so the hearing loss or damage can go undetected. The same danger arises at shows and concerts: the sound pressure level at a rock concert is around 105 dBA if the desired musical experience is to be achieved, and the level is the same for a symphony orchestra playing tutti, without any amplification. The human ear can of course tolerate this level, but not for 8 hours.

Further problems are the increased noise levels and noise pollution caused by powerful amplifiers and by our ever noisier and more crowded environment. The increased noise level is not only a problem in itself; it also indirectly drives sound pressure levels higher, because speech intelligibility and musical enjoyment require a minimal signal-to-noise ratio (SNR). So if the noise level rises by 20 dB, the signal level must also be raised by 20 dB, and today's devices can do this easily. In the dawn of electronics, in the age of vacuum tubes, there were no commonly used amplifiers that could provide the high sound pressure levels we have now, so numerous hearing losses were not being diagnosed among teenagers at that time; there were exceptions, of course, mainly among musicians.

The decline of hearing among elderly people is not a recent phenomenon. It happens not only because of extreme loading of the ear; it usually sets in later in life. The loss of the higher frequencies is typical, and beyond a certain limit it worsens the intelligibility of speech. Although the base harmonics of the human voice do not extend beyond 4 kHz, the overtones belong to the full picture, and the overtones lie in the lost range. Age-related hearing decline not only worsens the intelligibility of speech but also ruins the quality of music listening, because a relevant part of the musical information falls into the lost range.
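The 85 dBA / 8 h figure quoted above implies an exchange rate for exposure time. A minimal sketch of this calculation, assuming the 3 dB (equal-energy) exchange rule used in European occupational noise regulation (the function name and defaults are our own illustrative choices, not from the paper):

```python
def permissible_hours(spl_dba, ref_spl=85.0, ref_hours=8.0, exchange_db=3.0):
    """Permissible daily exposure before the 85 dBA / 8 h dose is reached.

    Equal-energy (3 dB exchange) rule: every +3 dB halves the allowed time.
    """
    return ref_hours / 2 ** ((spl_dba - ref_spl) / exchange_db)


if __name__ == "__main__":
    # 85 dBA -> 8 h; 88 dBA -> 4 h; a 105 dBA rock concert -> under 5 minutes
    for level in (85, 88, 105):
        print(f"{level} dBA: {permissible_hours(level) * 60:.1f} min")
```

Under this rule, the 105 dBA concert level mentioned above exhausts the daily dose in roughly five minutes, which illustrates why the ear's 10-15 minute accommodation lets the damage pass unnoticed.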
Challenge:
Our aim was to create a device that provides the desired sensation of volume without any damage to the ear, so that the measurable sound pressure level remains low. In addition, the device should not colour the tone, should not degrade the musical information, and should not distort audibly. A further aim was that the device should bring the higher frequencies down from the lost range into the lower range, as far as possible, so that they can be heard by hearing-damaged people, too.
Brief description of the invention:
The object of the invention is a simply realizable electronic device with whose help a natural sound experience is obtainable at a low sound pressure level. The arrangement can raise the sensation of volume without modifying the tone, and it maintains a fully linear frequency transfer function over the full dynamic range. The amplification method exploits psycho-acoustic effects to improve the ratio of perceived loudness to physical SPL, and it increases the perceived dynamic range.

Procedures already exist by which the sensation of volume can be raised without increasing the measurable physical sound pressure level. Methods that use the equal-loudness contours (often referred to as 'Fletcher-Munson' curves) amplify different frequencies differently; this was not the line of our research. The procedures in present use can produce an unpleasant sound sensation because of the unnatural tones that evolve, in contrast to the object of the invention. Present loudness-increasing procedures use compressors, or enrich the overtones across a wide frequency range. With the method described in the invention, on the other hand, the increased volume sensation is obtained without a fall in sound quality, by setting up an overtone range similar to the ear's own distortion. The device tries to imitate precisely the psycho-acoustic parameters of the human ear's volume sensation known so far. In essence, the instrument generates frequency-dependent overtone enrichment: we modify the ratio of even and odd overtones relative to the base harmonic, as a function of dynamics, signal level and frequency. The parameters of the device are precisely measurable, adjustable and reproducible; the sound character and musical experience, however, cannot be defined empirically, but can be tested statistically by averaging over many listeners.
The circuit can be realized with common analogue technology, and also with digital signal processors (DSPs).
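The basic principle of overtone enrichment can be illustrated with a simple polynomial waveshaper: a quadratic term contributes even harmonics and a cubic term odd ones. This is only a static sketch under our own assumed coefficients, not the patented circuit, which additionally varies the even/odd ratio with dynamics, level and frequency:

```python
import numpy as np

def enrich_overtones(x, even=0.10, odd=0.05):
    """Add even (x^2) and odd (x^3) harmonics to a signal in [-1, 1].

    Illustrative static waveshaper; coefficients are assumptions.
    """
    y = x + even * x**2 + odd * x**3
    return y / np.max(np.abs(y))      # renormalize to [-1, 1]


if __name__ == "__main__":
    fs, f0, n = 1000, 50, 1000                     # 50 Hz test tone, 1 s
    t = np.arange(n) / fs
    y = enrich_overtones(np.sin(2 * np.pi * f0 * t))
    spec = np.abs(np.fft.rfft(y)) / (n / 2)        # amplitude spectrum
    print(f"2nd harmonic: {spec[100]:.3f}, 3rd harmonic: {spec[150]:.3f}")
```

Feeding a pure 50 Hz sine through the shaper produces measurable components at 100 Hz and 150 Hz while leaving the fundamental dominant, which is the kind of controlled, ear-like distortion the text describes.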
Further targets, challenges:
The device is equipped with several control points by which the different parameters are adjustable, so the instrument can be tailor-made. Since everybody hears differently, everyone has a slightly different parameter list; there are, however, parameters that provide acceptable or good results for almost everyone. Numerous tests must still be performed to determine the parameters precisely. Furthermore, it would be practical to lay down special parameter sets for hearing-impaired persons. The prototype is currently available in an analogue construction; the digital variant must also be designed and built, for compatibility with the many digital devices in use. There might also be an opportunity to lower the official SPL limit values, or to set up a new weighting/measuring method that respects the nonlinearity of human hearing.
Possible applications:
The main benefit of the new arrangement is the prevention of today's increasing hearing loss caused by the worldwide spread of music listening with earphones and headphones. The module can also be used beneficially in hearing aids, since it can bring back the sensation of lost frequencies. The invention can greatly assist research into the nonlinear distortion behaviour of human hearing. The module can be used in any electronic device where sound amplification is desired (headphone amplifiers, MP3 and MP4 players, complete sound amplifiers, PA systems, High-End and audiophile equipment, car sound systems, multi-channel surround systems), and it can be connected to existing analogue or digital instruments as a docking module.
IP Status:
Patent pending.
References:
[1] Fletcher, H. and Munson, W. A.: 'Loudness, its definition, measurement and calculation', Journal of the Acoustical Society of America 5, 82-108 (1933).
[2] Pytel, J.: Audiológia. Victoria Kft., Budapest, 1996. ISBN 9637660-60-7.
[3] Patent 110798-13773E/SZT: 'Emberi fül torzítását modellező eszköz, valamint eljárás hangjel feldolgozására' ('Device modelling the distortion of the human ear, and method for processing an audio signal'; patent pending in Hungary).
OUTDOOR QUADRICOPTER TRAJECTORY TRACKING WITH UWB TAG LOCATOR Damir Šoštarić, Goran Horvat, Drago Žagar Department of Communications Faculty of Electrical Engineering J.J.Strossmayer University of Osijek Kneza Trpimira 2b, 31000 Osijek, Croatia email: damir.sostaric@etfos.hr, goran.horvat@etfos.hr, drago.zagar@etfos.hr
Abstract: The paper presents quadricopter flights in outdoor conditions with special tasks. Visual tracking of the quadricopter is realized with UWB technology: a small UWB tag is mounted on the quadricopter, and the UWB tracking software shows the path of the moving tag. With dynamic software refreshing and real-time synchronization, the path-drawing process is obtained. UWB transmits a signal over multiple frequency bands simultaneously, from 3.1 GHz to 10.6 GHz. UWB tags consume less power than conventional RF tags and can operate across a broad area of the radio spectrum. Ubisense, whose UWB technology we had available, uses active tags, which the company calls Ubitags, and readers, or Ubisensors, operating from 5.8 to 7.2 GHz. The Ubisensor readers are mounted on pillars and connected to each other with double LAN cables; this ensures minimal transmission delay, which provides precise locating of the active tag. To increase the accuracy of the active tag, precision GPS can be used, or a multi-point calibration must be carried out. With a replacement battery of higher current and higher capacity, longer flight times and flips at low altitudes are achieved. The weight limit with additional parts on the quadricopter was tested. The subjective feel of various controllers based on Android OS and iOS devices is also described. The tested quadricopter model, an ARDrone 2.0, demonstrated outstanding ability in absolute control mode. For launching and landing, the system has an ultrasonic rangefinder and an automatic capability for soft take-off and landing.

Keywords: quadricopter flying, UWB tag, exterior columns, UWB synchronous master/slave nodes, trajectory path, launching, landing, maneuver, flips
Acknowledgment
This work was sponsored by the XBee Team (http://www.xbee.tv) from their own resources. Special thanks to our colleague Marijan Herceg for lending the UWB hardware set.

I. Introduction

Outdoor tracking of moving objects is today strongly represented in scientific and military robotics and in communications research. Since the beginning of the information technology era, many ways of detecting objects have been discussed. Indoor tracking is commonly based on transmission and reception, or on reception alone, in the infrared spectrum. An infrared light source is suitable for indoor object tracking, but when such a system is deployed outdoors, noise is generated because of distance limitations, and interference is picked up from other light sources such as sunlight or streetlights. An indoor system usually uses an infrared light source, while the sensor is a standard infrared camera whose signal passes through analyzer software installed on a remote workstation [1]. Indoor trajectory tracking is typically used for precise positioning of objects, and more demanding practical applications with strictly defined routes are considered; object compensation algorithms for such applications are realized through multiple sensor/receiver units. In an indoor tracking system the disturbance acting on the object is small, because there is no wind or other atmospheric influence.

Outdoor systems work on slightly different principles, usually based on an RF (radio frequency) field, and depending on the technology used the system is more or less precise. RFID (radio-frequency identification) technology is appropriate for a small surface area [2]. Node multiplication ensures faster tag localization and better accuracy, and using different frequency bands (e.g. HF, high frequency, at 13.6 MHz and LF, low frequency, at 125 kHz) avoids interference between devices. The HF and LF bands can be served by LR (long range) and SR (short range) transceiver devices. RFID localization accuracy is ensured by selecting an appropriate active tag and by careful calibration of the software algorithm. NFC (near field communication) technology is not suitable for outdoor localization, because it works by touching receiver and transmitter together or bringing them into close proximity, usually no more than a few centimeters. ZigBee (IEEE 802.15.4) technology has a low communication speed, which is insufficient for quality measurement of tag/node position; its node localization is based on one or two coordinators, with an emphasis on multiplying router/end devices for better calculation, and a self-learning algorithm based on a propagation model. With the model integrated into an appropriate application (e.g. on Android), such a system has an advantage over the previously mentioned technologies; the main advantage of ZigBee is its long range (0.3 – 1.6 km in theory and 0.1 – 1 km in actual conditions). The fairly new UWB (ultra-wideband) technology proved to be the best solution for outdoor localization of a tag [3]. A flying object such as a quadricopter requires a tag of small weight with very low power consumption. The tag with its own power supply and the receiver unit are shown in Figure 1; the tag is enlarged relative to the receiver unit. A button on the active tag turns it on and off, and an integrated switch in the receiver allows up to six units (a hex network topology). The trajectory measurements in this paper were done with four receiving units.
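Although the Ubisense system computes the tag position internally (from time and angle of arrival rather than plain ranges), the geometry of locating a tag from four fixed receivers can be sketched with a linearized least-squares multilateration; the receiver coordinates below are hypothetical:

```python
import numpy as np

def locate_tag(anchors, ranges):
    """2D tag position from ranges to >= 3 fixed anchors.

    Linearizes the circle equations by subtracting the first one,
    then solves the resulting linear system in the least-squares sense.
    """
    (x0, y0), d0 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                              rcond=None)
    return pos


if __name__ == "__main__":
    # four receivers on the corners of a 10 m x 6 m flying area (hypothetical)
    anchors = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]
    tag = np.array([4.0, 3.0])
    ranges = [float(np.hypot(*(tag - np.array(a)))) for a in anchors]
    print(locate_tag(anchors, ranges))   # recovers the tag position
```

With four anchors the system is overdetermined, which is what makes the centimetre-level accuracy and noise rejection of the real installation possible.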
Figure 1. Active tag and receiver unit (rear and front view)
An appropriate UWB technology was thus selected for monitoring and localizing the flying object (quadricopter). Software calibration was done with the active tag at several outdoor locations, with a time delay, in order to log the time-lagged parameters.

II. Outdoor surface area, 2D and 3D view

2D layout of the randomly installed UWB receiver network nodes

When creating the receiver network, care was taken to achieve a rectangular layout. The outdoor pillars were mounted and the connections made; after software synchronization and active-tag calibration, the system was ready for the first measurement. For the monitoring sessions, a scenario was selected that passes through the key marginal points. Each pillar carries a receiver unit with a corresponding MAC address (Figure 2). The flying area is on average 10 m long and 6 m wide; the total surface area of 57 m2 was measured with a laser device and confirmed by the software after calibration of the receivers and the active tag. A randomly selected calibration grid was then traversed, remaining approximately 20 seconds at each envisioned point.
Figure 3. Outdoor surface 3D layout tracking trajectory isometry
III. Flying device and subjective sense of command device interface with remote video live view
Figure 2. Outdoor surface 2D layout of the receiver nodes
3D positioning of the UWB active tag on the Quadricopter

A live view of the Quadricopter position is shown through the Ubisense software. A desktop screen recorder (CamStudio) created an animation of the moving object, so the trajectory track is preserved in isometry. The maximum height for receiver node sensitivity is adjusted to 3 m, while the pillars are at 2.54 m. The installed UWB receivers are synchronized via Ethernet cable, while positioning of the active tag is done by the accompanying software. The tracking trajectory shows the track of the moving object with an accuracy of 1-3 cm, Figure 3. By creating real-time digital logging across the software, it is possible to build a regulatory system for an embedded device such as a flying object. Quadricopter communication is based on the IEEE 802.11 standard, and known commands recorded through a network protocol analyzer can be used for future preprogramming. An API for programming is available, and with some modification it is possible to maneuver the flying object.
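As an illustration, the logged positions can be post-processed offline. The sketch below assumes a hypothetical log format of (time, x, y, z) samples (the actual Ubisense log format differs) and computes path length and average speed of the tracked tag:

```python
import math

def trajectory_stats(samples):
    """Compute path length (m) and average speed (m/s) from logged
    (time_s, x_m, y_m, z_m) samples, a hypothetical log format."""
    length = 0.0
    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        length += math.dist(p0, p1)  # Euclidean distance between fixes
    duration = samples[-1][0] - samples[0][0]
    return length, (length / duration if duration > 0 else 0.0)

# Example: tag moving 1 m along x every second at a constant 2.5 m height
log = [(0.0, 0.0, 0.0, 2.5), (1.0, 1.0, 0.0, 2.5), (2.0, 2.0, 0.0, 2.5)]
length, speed = trajectory_stats(log)
```

The same loop could feed a real-time regulator instead of an offline report, provided the log is read as a stream.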
This section describes the subjective sense when operating the Quadricopter. The flying object is selected from the ARDrone series. The ARDrone 2.0 has two integrated cameras with software selection of the video source. The remote controller for the flying object is based on the IEEE 802.11 standard; any device running iOS 5 or higher, or Android 2.3 or higher, can control the Quadricopter and change its flying parameters. The live video view improves control and locating in the area. The laboratory testbed is configured with two batteries and two control devices. The original battery (1000 mAh) gives an autonomy of 12-15 min, while with an integrated 1350 mAh (25C) battery the Quadricopter stays in the air for 20 min. The subjective sense with the new battery (1350 mAh) is that it has a more aggressive effect on the motors: the peak current is higher and the response time to commands is shorter. This characterizes the flying hardware optimization and shows that the Quadricopter can work in more extreme conditions than before.

Flying object - Quadricopter

Quadricopter outdoor trajectory tracking is realized on a modified ARDrone 2.0 (Parrot) [4]. Integration of a custom compiled version of the Linux system gives better flying performance. The main new features of v2.0 compared to v1.0 are a high-definition camera with video recording on the remote controller (control device); USB recording or "on control device" recording is selectable in the application. Additional features are flight data sharing and a new pilot
mode with increased stability. While in flight, the Quadricopter's front camera transmits what it sees in real time onto the pilot's device screen. The HD camera resolution of 1280x720 shows a view from the sky in real time, suitable for remote video analysis and orientation in the area. The system becomes more stable if we consider methods of approaching autonomous flight from several directions. The most interesting additions are the higher resolution transmitted video and an onboard barometer sensor that allows it to fly higher outdoors than the sonar range. The phone/tablet app v2.0 also has a better FPV (first person video) mode. Advantages of the used Quadricopter model compared to existing solutions are: a 3-axis accelerometer, a 3-axis gyroscope, a 3-axis magnetometer and a pressure sensor, which completes the device and provides great vertical stability [3]. The sensors are digital, designed in MEMS technology. Faster response to commands is the result of MEMS (on-chip) technology and the high-speed CPU in the Quadricopter's regulatory system. On the bottom is an ultrasound sensor, which analyzes flight altitude up to 6 meters. With the outdoor hull, the system is protected against external impacts. The Quadricopter's principle of work is that each pair of opposite rotors turns the same way, Figure 4: one pair turns clockwise and the other anti-clockwise. The mechanical structure comprises four rotors attached to the four ends of a cross-shaped frame. Motor turning directions and axes, with the flying modes, are also represented in Figure 4. Thus there exist two modes of flying, (+) and (X); the X mode is used in the ARDrone design.
movement, which allows going forward and backward. Varying each rotor pair's speed in opposite directions yields yaw movement, which allows turning left and right, Figure 5.
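The rotor-pair principle above can be sketched as a simple motor mixer. This is a minimal illustration of how throttle, roll, pitch and yaw commands combine into four rotor speeds, not the actual ARDrone firmware mixer:

```python
def mix(throttle, roll, pitch, yaw):
    """Map stick inputs to four rotor speeds of a '+' configuration.

    front/rear spin one way, left/right the other, so a yaw command
    speeds one pair up and slows the other down by the same amount.
    """
    front = throttle + pitch - yaw
    rear  = throttle - pitch - yaw
    left  = throttle + roll + yaw
    right = throttle - roll + yaw
    return front, rear, left, right

# Pure yaw command: pairs change speed in opposite directions, so the
# total thrust stays constant while the body turns.
f, b, l, r = mix(throttle=1.0, roll=0.0, pitch=0.0, yaw=0.1)
```

Note that the sum of the four speeds is unchanged by roll, pitch or yaw alone, which is why the maneuvers in Figure 5 do not disturb altitude.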
Figure 5. Quadricopter movements: a) Throttle, b) Roll, c) Pitch, d) Yaw
Figure 6 shows the frame of the outdoor hull system with the customized non-original battery. Blade protection is ensured by a flexible framework, as is the embedded control electronics. The original 1000 mAh battery provides 10C continuous discharge with charging at 1 A, while the custom 1350 mAh battery provides 25C continuous discharge. The C index value represents a proportionally greater continuous discharge current: the greater the C value, the better the battery can supply the current required by the flying object. Figure 7 shows the Quadricopter without the outdoor and indoor hulls. Through the USB connector on the embedded electronics it is possible to attach some interfaces. A flash drive for recording is supported by default, and it is possible to connect additional hardware, e.g. any serial device with the RS232 protocol, because the FT232 chip driver can be installed [6]. Connecting an additional USB camera is possible, but its driver has to be compiled under the ARDrone Linux. Kernel support is very important for this device platform.
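The relation between capacity, C rating and discharge current described above can be made explicit: a battery's maximum continuous discharge current is its capacity in ampere-hours multiplied by the C rating. A worked example for the two batteries:

```python
def max_continuous_current_a(capacity_mah, c_rating):
    """Maximum continuous discharge current in amperes:
    capacity (Ah) x C rating."""
    return capacity_mah / 1000.0 * c_rating

original = max_continuous_current_a(1000, 10)   # original battery
custom   = max_continuous_current_a(1350, 25)   # custom battery
```

The custom battery can thus sustain more than three times the current of the original one, which matches the observed "more aggressive" motor response.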
Figure 4. Motor turning direction
Maneuvers are obtained by changing the pitch, roll and yaw angles of the Quadricopter, Figure 5, [5]. Varying the left and right rotor speeds in opposite directions yields roll movement, which allows moving left and right. Varying the front and rear rotor speeds in opposite directions yields pitch
Figure 6. Outdoor hull frame for blades and system on board protection
The Quadricopter also has a 10-pin header connector on the bottom. This connector is a command and data port with full-duplex communication. The input can be flying commands from additional embedded devices, or data from a GPS collector for self-localization.
Figure 8. iOS application status and HD camera video streaming
The application interface on Android is identical, although the specified commands have a slower response. Android WiFi remote control is based on an HTC Desire device, Figure 9.
Figure 9. HD camera video streaming on Android OS
Figure 7. Frame body with battery and USB connector
The application for controlling the Quadricopter is FreeFlight, with an open-source API community [7]. The Android API source is still limited in some functions, while iOS has full developer support.

Control user interface devices

During testing, an iPad 3 device was used as the management and control device for the iOS platform. The iPad 3 command set is integrated in TCP/UDP protocol packets with full-duplex communication in real time. The same transport layer is used on the Android OS device platform; the differences between iOS and Android OS are at the application layer. iOS performs better, with an additional command for flips in different directions, while Android offers only one default flip. The interface on the iPad 3 gives a better response to transmitted control signals; the subjective sense of flying is incomparable to Android. The updated firmware and application layer modification are shown in Figure 8, with a live view of video streaming in HD resolution.
Live video synchronization of flight from two PCs

The test case for recording video parameters was realized by recording three different video streams. The first screen, in quadrant II of the video sample, describes the flying object's trajectory in a 3D isometric dynamic view, Figure 10 [8]. The video was recorded from the PC desktop with CamStudio Recorder. The UWB localization software traces the path of the flying object and creates a logging file for real-time or future analysis. In Figure 10, quadrant I represents the video recorded from the Quadricopter, quadrant IV the video recorded from a mobile phone HD camera, and quadrant III a sequence-of-events animation (take-off, occurred disturbance and landing). The software tool used to edit the video synchronization and design the final movie is CyberLink PowerDirector [8]. Recognition of the flying objects in the air is made possible by adding a contour effect to the video. The effect is applied off-line, but future work is to develop a real-time analysis method for on-line post-processing of video from an external static camera source. Recognition of static outdoor objects and calculation of their own positions is possible by applying the contour effect to video recorded from the sources and
Quadricopter [9]. Known position points on a static object are defined by at least two reference points, while the Quadricopter camera is used for measuring angles. Positions are calculated using the embedded device's processor with Q15 trigonometry. Q15 is a system library designed for fast trigonometry calculation on low-frequency microcontrollers.
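The Q15 library itself is not detailed here, so the following is a generic sketch of Q15 fixed-point arithmetic: 16-bit signed integers representing values in [-1, 1), where a product of two Q15 numbers needs a 15-bit right shift. This is why such arithmetic is fast on low-frequency microcontrollers without a floating-point unit:

```python
Q15_ONE = 1 << 15  # scale factor: value = raw / 32768

def to_q15(x):
    """Convert a float in [-1, 1) to a 16-bit Q15 integer, saturating."""
    return max(-Q15_ONE, min(Q15_ONE - 1, int(round(x * Q15_ONE))))

def q15_mul(a, b):
    """Multiply two Q15 numbers; the 30-fraction-bit product is
    renormalized by a 15-bit arithmetic right shift."""
    return (a * b) >> 15

a = to_q15(0.5)
b = to_q15(0.25)
prod = q15_mul(a, b)        # represents 0.5 * 0.25 = 0.125
approx = prod / Q15_ONE
```

Trigonometric functions in such libraries are typically lookup tables or polynomial approximations evaluated entirely with this integer multiply.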
Precision GPS on a UWB receiver node can increase the accuracy of the active tag. Future work is to integrate a GPS receiver unit on the Quadricopter and to transmit NMEA protocol data across an XBee PRO module. The application can later be used for static analysis and for integrating the algorithm on a dynamically moving object.
References
[1.] D. Mellinger, A. Kushleyev, and V. Kumar, “Mixed- Integer Quadratic Program Trajectory Generation for Heterogeneous Quadrotor Teams,” IEEE International Conference on Robotics and Automation, May 2012.
Figure 10. Video stream synchronization of flying scenario
[2.] X. Huang, R. Janaswamy, and A. Ganz, “Outdoor Localization Using Active RFID Technology,” 3rd International Conference on Broadband Communications, Networks and Systems (BROADNETS 2006), San Jose, CA, ISBN 978-1-4244-0425-4, October 2006. [3.] M. Herceg, T. Švedek, and T. Matić, “Pulse Interval Modulation for ultra-high speed IR-UWB communication systems,” EURASIP Journal on Advances in Signal Processing, 2010. [4.] Model of Quadricopter ARDrone 2.0 (Parrot): http://ardrone2.parrot.com/
Figure 11. Video stream synchronization of flying scenario with contour effect
IV. Conclusion and future work

UWB technology used for outdoor locating of an active tag requires small computing resources. The power consumption of the UWB active tag is low and adequate for the tracking time in the application. With a single computer it is possible to track and calculate the trajectory path. The UWB user interface is very user friendly, and through the software's external interfaces it is possible to build a control channel for the Quadricopter. So far only video with identification of the process has been recorded, but future work is to build a controlling interface with long-range communication. Integration of OpenCV on the Quadricopter's Linux would also allow a second channel for localization of objects through video analysis. Integration of the analysis algorithm in OpenCV is considered for a static camera on the ground and a second computer synchronized in the cloud.
[5.] M. Orsag, M. Poropat and S. Bogdan, “Hybrid Fly-by-Wire Quadrotor Controller,” Automatika 51, pp. 19-32, 2010, ISSN 0005-1144 [6.] FT232 USB Linux support: http://www.ftdichip.com/Products/ICs/FT232BM.htm [7.] FreeFlight application available on AppStore and Google Play: https://itunes.apple.com/us/app/freeflight/ https://play.google.com/store/apps/details/freeflight [8.] Video synchronization of fly scenario: http://www.youtube.com/watch?v=MMeW_ECbNcQ&feature=youtu.be ; video synchronization of fly scenario with added contour effect: http://www.youtube.com/watch?v=Ap-Znz1SWG4&feature=youtu.be
OVERVIEW OF UPSCALING METHODS FOR SCALABLE CODED VIDEO Denis Vranješ, Snježana Rimac-Drlje, Mario Vranješ Department of Communication Faculty of Electrical Engineering Osijek Osijek, Croatia email: denis.vranjes@etfos.hr
Abstract: Due to the different characteristics of video displaying devices, as well as video transmission over networks with varying transmission conditions, the usage of scalable video coding has been increasing significantly. Three types of scalability are available: spatial, temporal and quality scalability. Depending on the user's device capabilities, there is often a need to upscale video coded at a lower spatial or temporal scale. This paper gives an overview and comparison of several existing spatial video upscaling methods.

Keywords: scalable video coding, upscaling, scalability, video quality evaluation

I. Introduction

Recently there has been rapid development in the creation and use of multimedia applications. Therefore, efficient video transmission over heterogeneous networks is very important. Since network conditions are time variant, optimal usage of the network at each moment is essential: the video material must be transmitted with the highest possible quality while the bit rate of the transmitted material does not exceed the network capacity. Efficient video transmission over different networks remains one of the challenging goals for multimedia communications because of the limited capacity of the networks and users' Quality of Service (QoS) requirements. The efficiency of coding and transmission of video materials is increased using scalable video coding (SVC). Using spatial scalability, video material can be coded at different resolutions. The emergence of high definition displays in recent years (e.g. 1280x720 and 1920x1080 or higher spatial resolution), along with the proliferation of increasingly cheap digital imaging technology, has resulted in a need for fundamentally new image processing algorithms.
Specifically, in order to display relatively low quality content on such high resolution displays, the need for upscaling algorithms has become an urgent market priority, with correspondingly interesting challenges for the academic community [1].
In this paper, the upscaling methods used in the SVC codec are tested on three sequences scalably coded by two codecs: H.264 SVC and WSVC (wavelet-based scalable video coding). The paper is organized as follows: Section II describes scalable video coding and the types of scalability, Section III describes different spatial upscaling methods, Section IV gives an overview of the experimental setup and results, and Section V presents the conclusions of the research.

II. Scalable video coding

SVC is the scalable extension of the H.264/AVC standard, in which the coded bit stream contains several layers. This extension enables spatial, temporal and quality scalability with a slight bit rate increase in comparison to the H.264/AVC codec. A scalable coded bit stream consists of one base layer and several enhancement layers, each of which increases the quality but also the bit rate of the coded material [2]. Scalable video coding for the experiments made in [3] is done with the JSVM 9.18 [4] reference software. The WSVC codec performs a spatio-temporal decomposition using the wavelet transform, thus ensuring the possibility of spatial and temporal scalability. Motion estimation provides the motion information used for computing the wavelet coefficients. The compressed bit stream consists of several layers. In the experiments made in [3] the method from [5] is used; the bit stream consists of 5 temporal layers, 3 spatial layers and
several quality layers [2]. The most important requirements for a scalable video coding standard to become successful are coding efficiency and complexity, in the sense that new tools should be added only if they are needed [6].

Temporal scalability

If the set of access units of a bit stream is divided into sets of temporally based layers, this is called temporal scalability. The layers consist of one base layer and several temporal enhancement layers [7]. The layers are identified by a temporal layer identifier T, which starts at 0 for the base layer and increases by one for each enhancement layer. Temporal scalability can be enabled in hybrid video coding by restricting motion-compensated prediction to reference pictures whose temporal layer identifier is less than or equal to that of the picture to be predicted. Also, every picture can be used as a reference picture for prediction. In general, temporal scalability enables one bit stream to support multiple frame rates, which are realized by different prediction structures [6].

Spatial scalability

To support spatially scalable coding, SVC follows the conventional approach of multilayer coding, which is also used in H.262 MPEG-2 Video, H.263, and MPEG-4 Visual. Each layer corresponds to a supported spatial resolution and is referred to by a spatial layer or dependency identifier D. The dependency identifier D for the base layer is equal to 0, and it is increased by 1 from one spatial layer to the next. In each spatial layer, motion-compensated prediction and intra-prediction are employed as for single-layer coding. But in order to improve coding efficiency in comparison to simulcasting of different spatial resolutions, additional so-called inter-layer prediction mechanisms are incorporated, as illustrated in Fig. 1 [7]. In order to restrict the memory requirements and decoder complexity, SVC specifies that the same coding order is used for all supported spatial layers.
The representations with different spatial resolutions for a given time instant form an access unit and have to be transmitted successively in increasing order of their corresponding spatial
layer identifiers. But as illustrated in Fig. 1, lower layer pictures do not need to be present in all access units, which makes it possible to combine temporal and spatial scalability.

Quality scalability

Quality scalability can be considered a special case of spatial scalability with identical picture sizes for base and enhancement layer. This case is supported by the general concept for spatially scalable coding and is also referred to as coarse-grain quality scalable coding (CGS). The same inter-layer prediction mechanisms as for spatially scalable coding are employed, but without the corresponding upsampling operations and the inter-layer deblocking for intra-coded reference layer macroblocks. Furthermore, inter-layer intra- and residual prediction are directly performed in the transform domain. When utilizing inter-layer prediction for coarse-grain quality scalability in SVC, a refinement of texture information is typically achieved by requantizing the residual texture signal in the enhancement layer with a smaller quantization step size relative to that used for the CGS layer. However, this multilayer concept for quality scalable coding allows only a few selected bit rates to be supported in a scalable bit stream. Especially for increasing the flexibility of bit stream adaptation and error robustness, but also for improving the coding efficiency of bit streams that have to provide a variety of bit rates, a variation of the CGS approach, also referred to as medium-grain quality scalability (MGS), is included in the SVC design. The differences from the CGS concept are a modified high-level signaling, which allows switching between different MGS layers in any access unit, and the so-called key picture concept, which allows adjusting a suitable tradeoff between drift and enhancement layer coding efficiency for hierarchical prediction structures.
With the MGS concept, any enhancement layer NAL unit can be discarded from a quality scalable bit stream, and thus packet-based quality scalable coding is provided [7].
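The layer identifiers described above make bit stream adaptation a simple filtering operation. The sketch below is a toy model, not real SVC NAL unit syntax, of extracting a sub-stream by discarding units whose temporal id T or quality id Q exceeds a target:

```python
def extract_substream(units, max_t, max_q):
    """Keep only units whose temporal id T and quality id Q do not
    exceed the targets; a simplified model of SVC sub-stream extraction."""
    return [u for u in units if u["T"] <= max_t and u["Q"] <= max_q]

# Hierarchical GOP of 4 pictures (poc 0..4) with T ids 0..2 and one MGS
# quality refinement (Q=1) per picture.
units = [
    {"poc": 0, "T": 0, "Q": 0}, {"poc": 0, "T": 0, "Q": 1},
    {"poc": 1, "T": 2, "Q": 0}, {"poc": 1, "T": 2, "Q": 1},
    {"poc": 2, "T": 1, "Q": 0}, {"poc": 2, "T": 1, "Q": 1},
    {"poc": 3, "T": 2, "Q": 0}, {"poc": 3, "T": 2, "Q": 1},
    {"poc": 4, "T": 0, "Q": 0}, {"poc": 4, "T": 0, "Q": 1},
]
half_rate = extract_substream(units, max_t=1, max_q=1)  # drops T=2 pictures
base_q    = extract_substream(units, max_t=2, max_q=0)  # drops MGS refinements
```

Because hierarchical prediction only references pictures of lower or equal T, and MGS units are refinements, both filtered streams remain decodable in this model.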
Figure 1. Multilayer structure with additional inter-layer prediction for enabling spatial scalable coding [7]
III. Upscaling methods

During the last decade, different upscaling methods have been proposed in the literature. In [1] Takeda et al. presented an upscaling methodology based on the notion of consistency between the estimated pixels, derived from a novel use of kernel regression [8], [9]. They proposed a framework which encompasses video denoising, spatio-temporal upscaling and super-resolution in 3-D. This methodology is based on the concept of Steering Kernel Regression (SKR), earlier introduced in [9] for 2-D signals. In [10] Ayvaci et al. proposed a new example-based video upscaling technique that exploits self-similarity among patches of a video in both space and time. They encoded image patches with over-complete dictionaries constructed in a local spatio-temporal neighborhood, and established correspondence using modern optical flow techniques. The resulting method performed favorably in comparison with state-of-the-art super-resolution techniques. In [11] Protter et al. explored a super-resolution algorithm of similar nature that allows processing sequences with general motion patterns. Their solution is based on the Nonlocal-Means (NLM) algorithm. They showed how this denoising method can be generalized into a relatively simple super-resolution algorithm with no explicit motion estimation; results on several test movies showed that the proposed method is very successful in providing super-resolution on general sequences. In [12] Protter et al. presented a new framework that ultimately leads to the same algorithm as their prior work [11], but the suggested approach is much simpler and more intuitive, relying on the classic super-resolution reconstruction (SRR) formulation and using a probabilistic and crude
motion estimation. The new approach also offers various extensions not covered in their previous work, such as more general re-sampling tasks. In [13] Freedman et al. proposed a new high-quality and efficient single-image upscaling technique that extends existing example-based super-resolution frameworks. Their approach does not rely on an external example database or on using the whole input image as a source for example patches. Instead, they followed a local self-similarity assumption on natural images and extracted patches from extremely localized regions in the input image. This allowed them to reduce the nearest-patch search time considerably without compromising quality in most images. Their tests showed that the local self-similarity assumption holds better for small scaling factors, where there are more example patches of greater relevance. They implemented these small scalings using dedicated novel non-dyadic filter banks, derived from principles that model the upscaling process. They demonstrated the new method's ability to produce high-quality resolution enhancement, its application to video sequences with no algorithmic modification, and its efficiency in performing real-time enhancement of low-resolution standard video into recent high-definition formats. In [14] Ebrahimi et al. introduced a novel super-resolution scheme for multi-frame image sequences. Their method is closely associated with the recently developed "non-local-means denoising filter". In the proposed algorithm, no explicit motion estimation is performed, unlike in many other methods. Their results are comparable, if not superior, to many existing approaches, especially in the case of low signal-to-noise ratio. Some simple upscaling methods are incorporated in the H.264/AVC/SVC reference software. The first method (M1) is the normative upsampling method designed to support Extended Spatial Scalability. It is based on a set of 4-tap filters.
These integer-based 4-tap filters were originally derived from the Lanczos-3 filters. This method supports any inter-layer scaling ratios, which can also be different in the horizontal and vertical directions. In the second method (M2) only dyadic rescaling
ratios are supported. The upsampling is realized via several dyadic stages. By default, in each stage, every second sample in the horizontal and vertical direction is taken from the input image, the missing luma samples are interpolated using the AVC half-sample interpolation filter with coefficients {1, -5, 20, 20, -5, 1}/32, and the missing chroma samples are interpolated using the "very simple" filter with coefficients {16, 16}/32. The third upsampling method (M3) applies three-lobed Lanczos-windowed sinc functions; any inter-layer scaling ratios, which can also be different in the horizontal and vertical directions, are supported. In the fourth method (M4) a combination of the AVC half-sample filters and bi-linear filters is used [15]. These four methods are used in our experiments.

IV. Experimental setup and results

In the research presented in this paper we compared the quality of video sequences upscaled by different upscaling methods. For these experiments, the scalably coded video sequences from the database available at [3] were used. This database contains three different raw sequences: IntoTree, DucksTakeOff and ParkJoy. The sequences differ in spatio-temporal activity, which is measured by the Spatial perceptual information (SI) value, the Temporal perceptual information (TI) value, and the product of SI and TI (SITI) (Table I) [2]. One frame from each of those three sequences is presented in Fig. 2. They are coded using two different codecs: H.264-based scalable video coding (SVC) and wavelet-based scalable video coding (WSVC), using a combination of spatial and temporal scalability at two different resolutions (320x180 and 640x360) and three different frame rates (6.25, 12.5 and 25 frames per second). After encoding, spatial upscaling to the original resolution of 1280x720 pixels and temporal upscaling to 50 fps is done.
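As an illustration of the dyadic luma upsampling used by method M2 described above, the sketch below applies the 6-tap half-sample filter {1, -5, 20, 20, -5, 1}/32 in one dimension. Border handling by clamping and the rounding offset are simplifying assumptions; the JSVM software has its own border extension:

```python
def upsample_1d(samples):
    """Dyadic 1-D upsampling sketch: copy input samples to even output
    positions and interpolate the half-sample positions in between with
    the 6-tap filter {1, -5, 20, 20, -5, 1}/32, clipped to 8-bit range."""
    n = len(samples)
    out = []
    taps = (1, -5, 20, 20, -5, 1)
    for i in range(n):
        out.append(samples[i])
        if i == n - 1:
            break
        # taps are centred between samples i and i+1; clamp at borders
        idx = [max(0, min(n - 1, j)) for j in range(i - 2, i + 4)]
        val = sum(t * samples[k] for t, k in zip(taps, idx))
        out.append(max(0, min(255, (val + 16) >> 5)))  # /32 with rounding
    return out

flat = upsample_1d([100, 100, 100, 100])  # constant signal stays constant
edge = upsample_1d([0, 0, 255, 255])      # sharp edge gets a single midpoint
```

The negative outer taps give the filter a mild sharpening character compared to bi-linear interpolation, which is visible at edges.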
Temporal upscaling is done by repeating frames 8, 4 or 2 times, depending on the frame rate of the coded material, while spatial upscaling is done using the 4 mentioned methods (M1, M2, M3 and M4) available in the scalable extension of the H.264/AVC codec. After upscaling, the sequences' quality
is evaluated using three different objective video quality assessment algorithms: Peak signal-to-noise ratio (PSNR) [16], Video quality metric (VQM) [17] and the Multi-scale structural similarity (MS-SSIM) index [18]. After that, the comparison of the upscaled sequences' quality using these four upscaling methods is done. In the experimental part of the paper, the quality of upscaled sequences is measured using objective video quality assessment algorithms. For objective evaluation, the resolution and frame rate of the reference and test material have to be the same.

TABLE I. SPATIAL AND TEMPORAL ACTIVITY VALUES: SI, TI AND SITI = SI*TI

Sequence      SI     TI     SITI
IntoTree      7.44   18.64  138.68
DucksTakeOff  13.28  23.18  307.83
ParkJoy       16.32  42.27  689.85
as expected, higher quality is achieved for sequences coded at 640x360 resolution than for sequences coded at 320x180 resolution. In any case, the best results are achieved for sequences upscaled
Figure 2. Sample frames from the test sequences: (a) IntoTree, (b) DucksTakeOff, (c) ParkJoy
Because of that, the test materials are upsampled to 1280x720 resolution and their frame rate is set to 50 fps. When all tested sequences are spatially and temporally upscaled, objective video quality measurements are done using the PSNR, VQM and MS-SSIM algorithms. These algorithms are chosen because they achieved the highest correlation with subjective results for scalably coded sequences in [2].
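Of the three metrics, PSNR is simple enough to state compactly; VQM and MS-SSIM involve perceptual models beyond the scope of a short example. A minimal PSNR sketch for two equally sized 8-bit luma frames:

```python
import math

def psnr(ref, test, peak=255):
    """Peak signal-to-noise ratio in dB between two frames given as
    flat lists of 8-bit luma samples of equal length."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

ref  = [50, 100, 150, 200]
test = [51, 99, 151, 199]   # every sample off by 1 -> MSE = 1
value = psnr(ref, test)
```

Since every sample differs by exactly 1, the MSE is 1 and the result is 10*log10(255^2), roughly 48 dB; identical frames give infinite PSNR, which is why PSNR is reported only for lossy material.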
Figure 3. Results for PSNR metric, resolution 320x180
For the comparison, the sequences are divided first by codec and then by resolution. Only the influence of spatial upscaling on the quality of the video sequences is tested, since temporal upscaling is done in the same way for all sequences. The results are presented in Fig. 3-8. For the SVC coded sequences, it is obvious that for both tested resolutions and for all three quality metrics the best results are achieved by the M1 method. Note that for the PSNR and MS-SSIM metrics a higher value corresponds to higher quality of the tested sequence, while for the VQM metric it is the converse. There is a significant difference only between the sequences upscaled using the M1 method and those upscaled using the other methods (M2, M3 and M4), i.e. all other methods achieve very similar results for the SVC codec. For the WSVC codec, the sequences upscaled using the M1 method achieve lower quality than sequences upscaled using the other methods. If only the M2, M3 and M4 methods are compared, it can be concluded that overall the best results are achieved by the M2 method. If the results for the same metrics and different resolutions are compared, there are obvious differences between the two codecs. For the SVC codec,
Figure 4. Results for MS-SSIM metric, resolution 320x180
Figure 5. Results for VQM metric, resolution 320x180
upscaled sequences coded at 320x180 resolution is higher than that of sequences coded at 640x360 resolution. It can be concluded that, for upscaled sequences coded using the WSVC codec, the resolution of the coded sequences is not as important a factor as it is for the SVC codec.
Figure 6. Results for PSNR metric, resolution 640x360
Figure 7. Results for MS-SSIM metric, resolution 640x360
Figure 8. Results for VQM metric, resolution 640x360
with the M1 method, while all other methods give similar but lower quality results. For the WSVC codec, the quality difference between upscaled sequences coded at different resolutions is not as significant as for the SVC codec. The best results are achieved for sequences upscaled with the M2 method. It is interesting that in some cases the quality of the
It can also be concluded that, for the same resolution and different metrics, the deviation differs between codecs: the relative deviation of the results for WSVC coded sequences is considerably higher than for SVC coded sequences.

V. Conclusion

Since multimedia applications are increasingly used, efficient video transmission is very important, and scalable video coding is therefore used more often. There are three types of scalability: spatial, temporal and quality scalability. For spatial scalability, sequences are coded at lower resolutions than the original sequences and therefore have to be upscaled before displaying. Four different upscaling methods were compared based on the quality of the upscaled sequences. Sequences coded at lower resolutions were upscaled to the original resolution of 1280x720 pixels and then evaluated using three different objective video quality assessment algorithms. Although none of the objective metrics performs consistently well for scalably coded video materials, all of them indicate that for sequences coded using SVC the best results are achieved by the M1 method, while for sequences coded using WSVC the best results are given by the M2 method. The resolution at which sequences are coded also influences the results of the objective metrics. For sequences coded with the SVC codec at higher resolution, the quality assessment results are better than for sequences coded at lower resolution with the same codec. For the WSVC codec the resolution does not have such a big influence on the quality of the upscaled sequences. Because of the increased use of scalably coded video materials, new upscaling methods which may give even higher quality of upscaled sequences will be the subject of our future work.
References
[1.] H. Takeda, P. Milanfar, M. Protter and M. Elad, “Super-resolution without explicit subpixel motion estimation”, IEEE Transactions on Image Processing, Vol. 18, Issue 9, Sept. 2009. [2.] D. Vranjes, D. Zagar, O. Nemcic, “Comparison of objective quality assessment methods for scalable video coding”, Proceedings Elmar 2012, Sept. 2012, pp. 19-22
[3.] J. S. Lee, F. De Simone and T. Ebrahimi, “Subjective quality evaluation via paired comparison: Application to scalable video coding”, IEEE Transactions on Multimedia, Oct. 2011, Vol. 13, No. 5, pp. 882-893 [4.] J. Reichel, H. Schwarz and M. Wien, Joint Scalable Video Model 11 (JSVM 11), Joint Video Team, 2007, doc. JVT-X202 [5.] N. Ramzan, T. Zgaljic and E. Izquierdo, “An efficient optimisation scheme for scalable surveillance centric video communications”, Signal Processing: Image Communication, 2009, Vol. 24, No. 6, pp. 510-523 [6.] A. Bjelopera, S. Grgic, “Scalable video coding extension of H.264/AVC”, Proceedings Elmar 2012, Sept. 2012, pp. 7-12 [7.] H. Schwarz, D. Marpe, T. Wiegand, “Overview of the scalable video coding extension of the H.264/AVC standard”, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 17, No. 9, September 2007, pp. 1103-1120
[12.] M. Protter, M. Elad, “Super resolution with probabilistic motion estimation”, IEEE Transactions on Image Processing, 2009. Vol. 18., No. 8 [13.] G. Freedman, R. Fattal, “Image and video upscaling from local self- examples”, ACM Trans. Graph. 28, 4, Article 106, 2009 [14.] M. Ebrahimi, E.R. Vrscay, “Multi-frame superresolution with no explicit motion estimation”, Proceedings of the 2008 International Conference on Image Processing, Computer Vision and Pattern Recognition (IPCV 2008), 2008. [15.] J. Reichel, H. Scwarz and M. Wien, Joint Sclable Video Model 11 (JSVM 11), Joint Video Team, 2007, doc. JVT-X202 [16.] D. M. Chandler and S. S. Hemami, „VSNR: A wavelet-based visual signal-to-noise ratio for natural images”, IEEE Transactions on Image Processing, Sep. 2007, Vol. 16, No. 9, pp. 22842298, [17.] M. H. Pinson and S. Wolf, „A new standardized method for objectively measuring video quality”, IEEE Transactions on Broadcasting, Sep. 2004, Vol. 50, No. 3, pp. 312-313, [18.] Z. Wang, E. Simoncelli and A. Bovik, „Multiscale structural similarity for image quality assessment”, in Conference Recommendation. 37th Asilomar Conference Signals, Systems and Computers, 2003., Vol. 2, pp 1398-1402,
[8.] J. V. D. Weijer, R. V. D. Boomgaard, “Least squares and robust estimation of local image structure”, Scale Space, International Conference, 2003., Vol. 2695, No. 4, pp 237-254 [9.] H. Takeda, S. Farsiu, P. Milanfar, “Kernel regression for image processing and reconstruction”, Ieee Transactions on Image Processing, 2007., Vol. 16, No. 2., pp 349-366 [10.] A. Ayvaci, H. Jin, Z. Lin, S. Cohen, S. Soatto, “Video upscaling via spatio-temporal self similarity”, International Conference on Pattern Recognition (ICPR), 2012 [11.] M. Protter, M. Elad, H. Takeda, P. Milanfar, “Generalizing the nonlocal- means to superresolution reconstruction”, IEEE Transactions on Image Processing, 2009., Vol 18, No. 1
BLENDED LEARNING IN ADULT EDUCATION IN THE ÓBUDA UNIVERSITY Franciska Hegyesi Óbuda University/Kandó Kálmán Faculty of Electrical Engineering, Budapest email: hegyesi.franciska@kvk.uni-obuda.hu
Abstract: Taking into consideration the changes in higher education teaching, it is paramount that teachers receive the most up-to-date training. Lately, the increasing number of students has gone hand in hand with a widening age range among our students, which accentuates the need for lifelong learning. Training mature students has proven to require a different approach to teaching in comparison with traditional teaching. Besides daytime lectures, a bigger role is played by distance learning and e-learning. The teachers' pedagogical knowledge and motivation are the most important assets of any teaching institution. This article is about educating mature students, the role of andragogy and the electronic learning system, Moodle, introduced in our institution. I. Introduction Adults continuing further education or considering a career change have created a new clientele for educational institutions. These changes have an effect on the network, management, andragogy and pedagogy of higher education. Higher education for mature students includes evening classes and distance learning for completing postgraduate courses; however, this category can also include young adults whose further education is continued in full-time education.
Figure 1. The age distribution of students in higher education Source: Ministry of National Resources, 2011, manuscript, own edition
Many higher educational institutions have recognized the need to broaden traditional teaching services by introducing distance learning and other alternatives in the hope of attracting increasing numbers of mature students.
(as shown in Figure 1, this effort was to no avail). According to the data presented, the number of mature students, who already have broad work experience, is higher than the number of young adults. The great challenges of adult education include the expansion of technological development, managing the time devoted to study, and the pressure and demands on the individual student. These students need to choose the most effective and efficient methods to fulfil the academic preparation for their chosen career. Teaching methods at universities are still far too rigid: teaching takes place from the front of a lecture theatre while students take notes. University lecturers are highly qualified specialists in their fields, preparing the next generation of specialists; however, from a pedagogical point of view these lecturers are unqualified. II. Teacher in higher education The Leuven Declaration of 2009 accentuated the need for a lifelong learning policy in higher education. The implementation of this policy has been supported not only by international organizations such as UNESCO and OECD, producing slogans and action programs, but also by the high expectations of everyday working life. [1] Specialists wanting to keep their positions need to constantly keep up to date with the developments in their field. In today's fast-developing world, learning cannot be restricted to children and young adults; it has to continue throughout adulthood. In 2011 our university had 11,870 students, with nearly 50% participating in distance learning.
Figure 2. Student numbers in Obuda University Source: Ministry of National Resources, 2011, manuscript, own edition
Mature students' learning has to be combined with other factors of their life, such as work, supporting their families, socialising and free time, which has an impact on their learning, making it very different from their previous school-based way of learning. The distractions adult students face imply the need for a positive attitude and motivation, which make their learning process a more rewarding and successful activity. The new learning culture will significantly change the role of teachers and lecturers. The new teaching methods require not only high motivation, but also building on previous knowledge and finding methods which build on the trainees' existing professional competence. Constant monitoring and assessment are paramount for the trainees' development, in combination with regular feedback and advice on how to improve. The teacher's methodological knowledge is based on the one hand on his/her previous knowledge and preparation as a teacher, and on the other hand on his/her capability of assessing the trainees' abilities and using this to select the best teaching method in order to succeed. This concept is hard to achieve in higher education because the focus is on the content rather than on the pedagogical aspect of delivering the syllabus. The lecturer feels more comfortable in his/her role as an engineer than as a teacher. This has been proven by the answers given by the students asked the following question: "In your opinion, is the lecturer who prepares you for your engineering career more comfortable in his/her role as an engineer or as an educator?" In the age of lifelong learning, adult educators must be conscious of their double role: facilitators and educators.
Fig. 3 Students' opinion regarding the role of the lecturer
Fig. 4 Students who think that the lecturer has more the role of an engineer than of an educator
If we look at the educator's profession, we should take into consideration two different aspects: besides the role of the educator, we have to consider his/her subject knowledge and the way he/she is able to put across the syllabus. This skill seems to be much more important to the students than their lecturer's qualifications.
Fig. 5 In your opinion, considering the above aspects of a lecturer, which is the most important?
The effectiveness of the educator's job is determined by how confident he/she feels in the role of educator, and whether he/she has concerns about being put in an embarrassing situation where he/she could lose the respect of students. The skills required for being a good educator are vast: educators need a variety of behaviour patterns ready and the flexibility to choose the appropriate way of reacting in any unexpected situation. III. Adults in higher education Most mature students continue their studies through distance learning alongside work, therefore their learning needs are different from those of students in full-time education. I have asked distance learning students what they find most difficult. Surprisingly, they find the time factor more difficult than the syllabus.
Fig. 6 Choosing the time and place of the course
I thought that the time factor is not the only one when it comes to distance learning, so I asked students what kind of support they expect from the university. Most students answered that the quality and range of information provided by universities should be higher. Students use the world wide web besides the information provided by universities. [2] In technical subjects, besides theoretical knowledge, developing one's approach is very important. The approach can be developed with well-planned and even better executed lectures, and for this reason many educators do not see the need for the introduction of an online learning system. In teaching other subjects this method has been used successfully, so technical educators are said to be "old fashioned and nonchalant". We are certain that an online system is necessary to prepare the next generation of specialists, so that we can be competitive with other higher educational institutions. The Moodle system was introduced at Budapest Technical College (Budapesti Műszaki Főiskola) in 2006. We are proud to announce that the introduction of the Moodle system in our institution has been a success. The transition was not easy at first, but the commitment, determination and hard work of our staff made it possible. The number of our courses is constantly increasing: we are able to offer more than 300, and the number of students is more than 5000. The network is mostly used to inform students about the lectures and tasks they have to complete, and also to set and submit assignments. IV. Technical background When we decided to introduce the system, we knew that a long and difficult road was ahead of us. I knew that the students would have no objections to the idea; however, the same cannot be said about the staff, therefore putting across the final objective was essential for the introduction of this system. If we do not have high expectations and clear and concise objectives, further development cannot be expected. [3] Therefore we had to combine traditional teaching methods with modern ways of learning. We had to take into consideration technical developments and the needs of our students. [4] The goal was to slowly get the staff on board with this new initiative. This system was designed for the whole university, across all subjects; the overall objective is to make our students' preparation as easy and effective as possible. Most courses are run by the Rejtő Sándor Faculty of Light Industry and Environmental Protection Engineering, which introduced this system one and a half years ago, and it has been used more and more. The system can be used during lectures, taking an important part in the teaching process, and is also indispensable for uploading assignments. With more and more faculties using the system, the system registered over 110,000 records during the academic year on more than one occasion. The job of tutors is extremely complex: they have to provide their students with tasks which involve high-level thinking, link the students' previous knowledge, take into consideration students' individual needs, and build teamwork activities. The tutors themselves need to be up to date with how the system works. [5] Our university employs 381 lecturers, but only 39 use the network.
I have asked members of staff to complete a questionnaire regarding why they do not use the system.
Only 78 lecturers completed the questionnaire. We should recognise that our role as educators is just as important as our subject knowledge and qualifications. At the end of the day we work in an educational institution, and we have to have the best possible training so that we can train our students to the best of our ability.
Fig. 7 Why are you not using the system?
Asked what exactly they use the system for, the majority use it to upload subject-related information. They do not simply upload a 150-page PDF file, but unit-by-unit information for students to use gradually.
Fig. 10 How much help were the materials from the system (tests, ppts, guides, extracts)?
The majority of the students were unhappy about the lack of communication from their tutors; this is due to the reduced number of lecturers who use the system, which prevents them from following their students' activities on it. The lecturers answer students' questions only via email.
Fig. 11. Ways of communication used
Fig. 8 The teachers’ activities
The majority of the students have been pleased with the quality of the materials uploaded, but much less so with the quality of their tutors' work. This is valuable information for us to develop the system further by concentrating on the weakest points. "Judging the tutors' work within the Moodle system: by participating in the courses provided, they could follow their students' progress, give advice when needed and answer questions."
Fig. 9 How much help were the tutors
At the centre of every teaching process is the relationship between the teacher and the student; for those students who are not yet confident in their learning, the teacher is indispensable. The job of the teacher is not only to pass on the content of the course but to put students in situations where they have to apply what they have learnt. The goal of the teacher should be to develop each of his/her students, taking into consideration their ability, personality and pace of learning. The teacher has to be a role model for the students: open minded, putting his/her theories to the test but, most importantly, able to acknowledge mistakes. These qualities will make the students open to learning. V. Summary Our goal is to further encourage our staff to use the system, to help them find their own subject's interest within the system, and to convince them that this system is not an impersonal platform but rather a tool which will help their students' preparation. The secret of good teaching is based on the right mixture of optimum teaching methods, used to bring out the maximum of our students. The mixture is right if the answers to the following questions contain the same or very similar components. • Which teaching method is the best for my subject? • Which teaching method is the best to put across my subject to my students? • Which teaching method is best for my subject from the point of view of my university's expectations and limitations? Let's not forget: the best cocktail is not the one which contains a mixture of all the drinks we know, but a mixture of those few which complement each other to bring out the best taste, much better than each drink on its own. However, there are drinks which work best on their own, like whiskey.
References
[1.] The Bologna Process 2020, http://www.ond.vlaanderen.be/hogeronderwijs/bologna/conference/documents/Leuven_Louvain-la-Neuve_Communiqué_April_2009.pdf
[2.] Vig Zoltán: Internetes attitűdvizsgálatok a felsőoktatásban [Internet attitude surveys in higher education], In: Megújuló szakképzés – szemelvények diplomamunkákból, BME MPT, 2005, pp. 153-183
[3.] Rosenberg, Marc J.: E-learning – Strategies for Delivering Knowledge in the Digital Age, McGraw-Hill, 2001
[4.] Robert A. Ellis, Rafael A. Calvo: Minimum Indicators to Assure Quality of LMS-supported Blended Learning, In: Educational Technology & Society, 2007, pp. 60-70
[5.] Ambrusné Dr. Somogyi Kornélia, Pasaréti Otília: EGYETEM LETTÜNK – MERRE TOVÁBB? Az informatika oktatás lépcsői és problémái [We became a university – where next? The steps and problems of informatics education], Matematikát, fizikát és informatikát oktatók XXXIV. konferenciája (MAFIOK), Szent István Egyetem Gazdasági Kar, Békéscsaba, 24-26 August 2010, DVD supplement, pp. 1-12, ISBN 978-963-269-201-2
DESIGN AND PERFORMANCE ANALYSIS OF RADIOFREQUENCY ADIABATIC NEUTRON SPIN FLIPPERS János FÜZI Pollack Mihály Faculty of Engineering and Information Technology, University of Pécs, Hungary Neutron Spectroscopy Department Wigner Research Centre for Physics, Budapest, Hungary Faculty of Electrical Engineering and Computer Science Transilvania University Brasov, Romania
Abstract: The design method of radiofrequency adiabatic neutron spin flippers is presented by numerical simulation of neutron motion in time- and spatial-coordinate-dependent magnetic fields. Free neutrons behave like tiny gyroscopic compasses and can be employed for the investigation of magnetic fields in free space as well as of magnetic structure in matter. Polarized neutron beams are prepared for these experiments and scattering patterns recorded for both spin states of the incoming beam. The common-mode effects due to nuclear scattering can be eliminated by subtraction, allowing access to the significantly weaker magnetic scattering. Changing the spin state of neutrons can be performed by means of spin flippers. Adiabatic RF flippers are efficient in the case of thermal and cold neutrons, because they require no material in the beam path. Flipping is achieved by a combination of an orthogonal static gradient field and a time-varying magnetic field. Design examples and experimental results are presented. Keywords: polarized neutron spectroscopy, adiabatic fast passage, neutron spin flipping I. Introduction Free neutrons are electrically neutral particles that interact primarily with nuclei (absorption, scattering). Their associated de Broglie wavelength is in the range of atomic distances, thus atomic-resolution structural investigations are possible by means of neutron spectroscopy methods [1-3]. Compared to X-rays they carry relatively low energy at the same wavelength, consequently producing less destruction in the sample. A long free path in most materials and a largely varying absorption cross section provide penetration depth and contrast for radiography and tomography experiments. Moreover, the nuclear nature of neutron-matter interactions makes them isotope sensitive, a feature exploited by contrast matching and enhancing techniques.
Well-established experimental techniques exploit the favorable features of thermal (0.5 – 3 Å) and cold (2 – 20 Å) neutron beams: prompt gamma activation analysis provides non-destructive chemical composition determination; elastic scattering (diffractometry, holography) shows where the atoms are in a structure; inelastic scattering (backscattering, spin echo) shows how the atoms move; small angle scattering provides access to structural investigations in the nm – mm range. Being endowed with angular momentum (spin) as well as a magnetic moment, free neutrons behave like tiny gyroscopic compasses and can be employed for the investigation of magnetic fields in free space as well as of magnetic structure in matter. An inhomogeneous magnetic field produces a force that deflects the neutron path, and the neutron precesses around a magnetic flux density vector non-collinear with its magnetic moment. Interactions between neutrons and magnetic fields are much weaker than scattering on nuclei; nevertheless, nuclear scattering can be numerically eliminated from the recorded scattering pattern and magnetic scattering results become accessible.
II. Neutron Motion in Magnetic Field
The influence exerted by external magnetic fields upon free neutrons can be computed from the equations of motion of a classical magnetic moment s endowed with a constant-magnitude angular momentum L [4]. These quantities are connected by the gyromagnetic ratio γ = −183 MHz/T, according to: s = γL
(1)
An inhomogeneous magnetic field exerts a force upon the neutron: F = (s · ∇)B
(2)
In case the neutron is aligned parallel (s · B = sB, spin-up) or anti-parallel (s · B = −sB, spin-down) to the magnetic flux density vector B, and the external magnetic field is free of currents (∇ × B = 0), (2) becomes
F = ± s∇ B
(3)
as shown in Fig. 1a. This effect is exploited in the Stern-Gerlach experiment [5] to separate neutrons of different spin states, and by magnetic lenses used for neutron focusing. The magnetic field also exerts a torque on the neutron: T = s × B
(4)
causing precession of its orientation around the magnetic flux density vector with the Larmor angular velocity ω. The Larmor angular velocity relates to the external magnetic field: ω = −γB
(5)
The limiting cases for a neutron flying through a magnetic field are: non-adiabatic precession, when the flux density changes direction much faster than the Larmor precession; and adiabatic spin rotation, when the flux density change is much slower than the Larmor precession and the neutron magnetic moment follows the orientation of the external magnetic field. The complex motion of neutrons (displacement in space and rotation of the spin orientation) is computed in the time domain, starting from known values at a given instant t: neutron magnetic moment s, position r, velocity v and flux density B(r,t). Using a small enough time increment Δt, the precession angle α and the position increment Δr (6) will also be small enough to consider the acceleration (7) and the angular velocity constant during the time increment (8). Under these conditions the variation of the neutron magnetic moment Δs corresponding to the time increment Δt can be computed according to (Fig. 1b):
Figure 1. a) Forces on aligned neutrons in inhomogeneous magnetic field; b) vectors in the plane orthogonal to the magnetic flux density vector.
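The time-stepping scheme of this section, holding B constant over each small increment Δt and rotating the moment by the corresponding precession angle, can be sketched as follows. This is an illustrative numpy sketch, not the authors' code: the per-step update uses an exact rotation around the local field in place of the paper's Δs formula, and the spatial motion (force and acceleration) is omitted to show the precession alone.

```python
import numpy as np

GAMMA = -1.832e8  # neutron gyromagnetic ratio in rad/(s*T), i.e. about -183 MHz/T

def precess(s, B, dt):
    """Advance the magnetic moment s by one step dt in a field B held constant
    over the step: an exact rotation around B by the Larmor angle, solving
    ds/dt = omega x s with omega = -GAMMA * B."""
    Bmag = np.linalg.norm(B)
    if Bmag == 0.0:
        return s
    axis = B / Bmag
    angle = -GAMMA * Bmag * dt          # Larmor precession angle for this step
    s_par = np.dot(s, axis) * axis      # component along B, unchanged
    s_perp = s - s_par
    return s_par + np.cos(angle) * s_perp + np.sin(angle) * np.cross(axis, s_perp)

# A transverse moment in a 1 mT field precesses at ~29.16 kHz; after exactly
# one Larmor period (split here into 1000 steps) it returns to its start.
B = np.array([0.0, 0.0, 1.0e-3])
s = np.array([1.0, 0.0, 0.0])
f_larmor = abs(GAMMA) * np.linalg.norm(B) / (2.0 * np.pi)
dt = 1.0 / (f_larmor * 1000.0)
for _ in range(1000):
    s = precess(s, B, dt)
```

Because each step is an exact rotation, the step size only has to resolve the variation of B(r,t) along the path, not the Larmor period itself.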
(9)
III. Adiabatic Fast Passage
Adiabatic fast passage [6-8] offers an efficient way to change the neutron spin state. The change of the neutron spin orientation with
respect to the external magnetic field – spin state flipping – is performed by means of two mutually orthogonal magnetic fields: • A static guide field, with a gradient along the neutron path: (1) where x is the coordinate along the neutron flight direction and k the unit vector of the guide field orientation, transverse to the former; • A time-varying flipping field with frequency equal to the Larmor frequency corresponding to the flux density of the gradient field in the middle of the flipping region: (2) In a frame connected to the neutron, rotating with the Larmor angular velocity around the gradient field direction, the gradient field vanishes in the central region and changes sign between the upstream and downstream ends of the flipping region: (3) The neutron spin precesses about Bg + Bf (in the rotating frame it adiabatically follows B*g + Bf) and exits the flipping region in a spin state opposite to the one it had at the entrance. The total magnetic flux density along the neutron flight path within the spin state flipping region: (4) is used in the design phase and is replaced in performance simulations by the sum of the actual, measured static field profile and the RF flux density computed using the Biot-Savart law from the measured current density in the coil, or measured directly by means of a small probe coil. In this case B2 is also a function of position and the computation region is extended up to the point where it vanishes completely. IV. RF Flipper Design Tuning the magnetic field gradient (Fig. 2) by permanent magnets, soft magnetic yokes and shunts provides the following advantages: the magnetic frame is compact; the device is robust, insensitive to the influence of external magnetic fields or the vicinity of ferromagnetic bodies; and the flipper can be joined to the polarizer unit, using the strong magnetic field needed for polarizing supermirror (PSM) [9] saturation therein to create the static guide field with gradient.
Figure 2. Gradient field profiles in the ANSTO Platypus flippers.
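The adiabatic fast passage described above, a static field ramping through resonance while a transverse RF field oscillates at the mid-ramp Larmor frequency, can be simulated with the same per-step rotation idea. The sketch below is illustrative only: all parameter values (field strengths, transit time, the sine RF envelope) are assumptions for demonstration, not the Platypus device values, and the linear RF coil field is modelled directly in the laboratory frame.

```python
import numpy as np

GAMMA = -1.832e8          # neutron gyromagnetic ratio, rad/(s*T)

def rotate_spin(s, B, dt):
    """One time step: exact precession of s around a constant B,
    solving ds/dt = omega x s with omega = -GAMMA * B."""
    Bmag = np.linalg.norm(B)
    if Bmag == 0.0:
        return s
    axis = B / Bmag
    angle = -GAMMA * Bmag * dt
    s_par = np.dot(s, axis) * axis
    s_perp = s - s_par
    return s_par + np.cos(angle) * s_perp + np.sin(angle) * np.cross(axis, s_perp)

# Illustrative flipper parameters (assumed, not the ANSTO device values):
B0 = 1.0e-3               # static field at the centre of the ramp, T
dB = 0.4e-3               # total static field change across the flipping region, T
B1 = 1.0e-4               # co-rotating RF amplitude, T (linear coil amplitude 2*B1)
tau = 2.0e-3              # neutron transit time through the flipping region, s
wrf = abs(GAMMA) * B0     # RF angular frequency: resonance at mid-ramp

s = np.array([0.0, 0.0, 1.0])          # enter spin-up along the guide field
dt = 1.0e-7
for n in range(int(round(tau / dt))):
    t = n * dt
    Bz = B0 + dB * (0.5 - t / tau)     # static field ramps down along the path
    env = np.sin(np.pi * t / tau)      # RF envelope: zero at the region edges
    Bx = 2.0 * B1 * env * np.cos(wrf * t)
    s = rotate_spin(s, np.array([Bx, 0.0, Bz]), dt)
# s[2] ends close to -1: the spin state has been flipped.
```

With these numbers the adiabaticity parameter γB1²/(dBz/dt) is roughly 9, large enough that the moment follows the effective field through the resonance and exits reversed; shrinking B1 or speeding up the ramp degrades the flip, mirroring the frequency- and amplitude-tuning discussion in the text.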
The RF coil and power supply are designed such that the resonance frequency – defined by the coil and an impedance-matching capacitor – corresponds to the static gradient magnetic field value in the middle of the ramp. The necessary RF field magnitude results from the condition that the flipping process is completed during the time the fastest neutrons in the range of interest pass through the device. The importance of frequency tuning is revealed by the simulation results plotted in Fig. 3. It can be shown that increasing the RF current magnitude cannot compensate for the frequency offset. Should there be circumstances that affect the gradient field profile (like the stray field of strong sample-environment cryomagnets), the frequency must also be adjusted accordingly, by connecting various capacitors into the resonant circuit. V. Neutron Beam Test of RF Flipper Operation
Figure 3. Simulation of the evolution of the neutron spin along-field component while passing through the flipper. The neutron wavelength is 2.5 Å.
The polarizing system of a typical neutron reflectometer is shown in Fig. 4 [11, 12]. The supermirror of a transmission polarizer selects the neutrons of the unpolarized beam according to their spin state: spin-up neutrons are deflected by the polarizer and absorbed; spin-down neutrons cross the mirror (an Fe/Si multilayer on a neutron-transparent Si wafer substrate). When on, the polarizer flipper changes the spin state of transmitted neutrons from spin-down to spin-up. The analyzer supermirror separates the reflected beam into spin-up (reflected) and spin-down (transmitted) components. A post-sample flipper allows identification of real spin-flip events due to scattering by the magnetic structure of the sample from those caused by instrument imperfections.
Figure 4. Spin state controlling components of the ANSTO Platypus polarized neutron reflectometer.
The method of flipper efficiency evaluation relies on time-of-flight, energy-sensitive recording of the neutron beam [10] passing through a polarizer, incident-side flipper, downstream flipper and analyzer (like a reflectometer without a sample). The spin-up neutrons deflected by the polarizer are absorbed. Both the reflected and transmitted components exit the analyzer and reach the detector. The polarizing supermirrors are efficient below a critical momentum transfer (above a critical wavelength for a given reflection angle), above which neutrons are transmitted regardless of spin state.
Figure 5. Images and spectra of the polarized beam (both flippers off; polarizer flipper on, analyzer flipper off).
A global polarization factor – the product of the PSM and flipper efficiencies – is determined according to: reflected beam: (1) transmitted beam: Fig. 5 shows the beam images and spectra of the transmitted and reflected beam in two of the four measured flipper combinations (polarizer/analyzer flipper on/off: 1/0). The product of the two flipper efficiencies results as: (2) The transmitted beam is bright when both flippers are in the same state (and both PSMs transmit), the reflected beam with one flipper on and one off (the analyzer PSM operates in reflection mode). VI. Conclusions Numerical computation of neutron motion in magnetic fields, and specifically the simulation of adiabatic fast passage, can be performed considering the neutron to be a classical gyroscopic compass. Adiabatic radiofrequency neutron spin flippers can reach flipping efficiencies close to unity, as forecast by simulations and proven by experiment. An important advantage of the method is that it does not require any material in the beam path (which would cause unwanted scattering). The flipping region (with significant RF magnetic field, slightly larger than the RF coil length) has to be long enough to allow flipping of the fastest (lowest wavelength) neutrons in the desired energy range. The magnetic flux density at the lowest limit of the gradient field needs to be large enough to ensure preservation of the spin state (guide field).
A higher gradient in conjunction with the guide field lower limit means a higher mean value of the gradient field; thus a higher frequency is required by the AFP resonance condition and a larger RF current magnitude is also necessary for flipping (both requirements lead to higher voltage on the balancing capacitor). Therefore an optimal, simultaneous tuning of the magnetic fixture and the RF resonant circuit is beneficial in the design phase. Robust (unaffected by stray fields of neighboring devices and/or the vicinity of ferromagnetic parts) polarizer/analyzer + flipper units can be built using the magnetic fixture applied for polarizing supermirror saturation, extending its iron yoke and tuning the field gradient by means of appropriately shaped and positioned yokes.
References
[1.] Shull CG, Wollan EO: X-Ray, Electron, and Neutron Diffraction, Science 23 (1948) 69-75
[2.] Brockhouse BN, Hurst DG: Energy Distribution of Slow Neutrons Scattered from Solids, Phys Rev 88 (1952) 542-547
[3.] Bacon GE: Neutron Diffraction, Clarendon Press, Oxford, 1962
[4.] Mezei F: Neutron spin echo: A new concept in polarized thermal neutron techniques, Zeitschrift für Physik 255 (1972) 146-160
[5.] Gerlach W, Stern O: Das magnetische Moment des Silberatoms, Zeitschrift für Physik 9 (1922) 353-355
[6.] Bloch F: Nuclear Induction, Phys Rev 70 (1946) 460-474
[7.] Hautle P, van den Brandt B, Konter JA, Mango S: Polarization reversal by adiabatic fast passage in various polarized target materials, NIM A 356 (1995) 108-110
[8.] Holley AT, Broussard LJ, Davis JL, Hickerson K, Ito TM, Liu C-Y et al: A high-field adiabatic fast passage ultracold neutron spin flipper for the UCNA experiment, Rev Sci Instrum 83 (2012) 073505
[9.] Mezei F: Novel polarized neutron devices: supermirror and spin component amplifier, Commun Phys 1 (1976) 81-85
[10.] Füzi J: Neutron beam phase space mapping, In: Modern Developments in X-Ray and Neutron Optics (Springer Series in Optical Sciences, 137), Eds: Erko A, Krist TH, Idir M, Michette AG, Springer, Berlin (2008) 43-57
[11.] Saerbeck T, Klose F, Le Brun AP, Füzi J, Brule A, Nelson A, Holt SA, James M: Polarization "Down Under": The polarized time-of-flight neutron reflectometer PLATYPUS, Rev Sci Instrum 83 (2012) 081301
[12.] Bottyán L, Merkel DG, Nagy B, Füzi J, Sajti Sz, Deák L, Endrőczi G, Petrenko AV, Major J: GINA – A polarized neutron reflectometer at the Budapest Neutron Centre, Rev Sci Instrum 84 (2013) 015112
REMOTE ENVIRONMENTAL NOISE MONITORING USING WIRELESS MULTIMEDIA SENSOR NETWORKS Goran Horvat, Damir Šoštarić and Drago Žagar Department of Communications, Faculty of Electrical Engineering, J. J. Strossmayer University of Osijek, Osijek, Croatia email: goran.horvat@etfos.hr
Abstract: Remote environmental noise monitoring presents an interesting approach to monitoring phonic pollution over large areas. According to the Environmental Noise Directive adopted by the European Union in 2002, phonic pollution needs to be reduced or prevented in order to minimise the harmful effects caused by environmental noise. The main prerequisite for reducing environmental noise is continuous monitoring of the noise over time at multiple locations, which can be problematic in dense urban or vast rural areas. Also, since noise is a multimedia signal, it needs to be adequately processed and transmitted to the destination. In this paper we propose a system capable of transferring noise data from a large number of sensors by means of a Wireless Multimedia Sensor Network (WMSN). By implementing a WMSN for environmental noise monitoring, the need to set up additional infrastructure is eliminated and the ability to send multimedia data is assessed and enabled. The proposed system can host a large number of WSN nodes, enabling the propagation of noise data from diverse locations. By implementing multimedia processing within the WMSN nodes, the processing of the data is distributed within the network, eliminating the centralised processing of large amounts of data. The processing is performed using an FFT spectrum max-hold, transmitted every ΔT seconds to the network coordinator. Keywords: environmental noise monitoring, WSN, WMSN, Wireless Multimedia Sensor Network, FFT, embedded system. I. Introduction With the increase in world population and in the number of vehicles on the roads, environmental noise and phonic pollution are becoming more and more pronounced. With the increase in surrounding environmental noise, noise itself has become a worldwide problem, not only for urban but for rural areas as well [1].
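The per-node FFT spectrum max-hold scheme named in the abstract can be illustrated as follows. This is a rough numpy sketch, not the node firmware described later in the paper: the frame length, Hann windowing and reporting logic are assumptions made for the example.

```python
import numpy as np

def spectrum_max_hold(frames, sample_rate):
    """Per-node processing sketch: FFT each captured audio frame and keep,
    for every frequency bin, the maximum magnitude seen during the
    reporting interval; only this compact vector is transmitted."""
    hold = None
    for frame in frames:
        mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        hold = mag if hold is None else np.maximum(hold, mag)
    freqs = np.fft.rfftfreq(len(frames[0]), d=1.0 / sample_rate)
    return freqs, hold

# Example: a 1 kHz tone occurs in only one of the frames captured during the
# interval; the max-hold spectrum still retains its peak.
fs, n = 8000, 256
t = np.arange(n) / fs
quiet = [0.01 * np.random.default_rng(i).standard_normal(n) for i in range(3)]
tone = np.sin(2 * np.pi * 1000.0 * t)
freqs, hold = spectrum_max_hold(quiet + [tone], fs)
peak_freq = freqs[np.argmax(hold)]
```

The point of the scheme is bandwidth reduction: one magnitude vector per reporting interval replaces the raw audio stream, while transient noise events are still captured in the held spectrum.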
Consequently, developing new and innovative intelligent systems for noise monitoring is of utmost importance, not only for the researchers devoted to this subject but for the population as a whole, since adequate noise monitoring has the potential to reduce noise by various means. However, in order to control traffic dynamically, an infrastructure that supports noise monitoring has to be present. The problem here is the inability to track the noise induced by traffic in real time over a large urban or rural area. This is mainly attributed to the large areas that require noise monitoring and the expensive equipment that needs to be set up and interconnected. On the other hand, the availability of low cost multimedia devices such as small microphones and CMOS technology has accelerated the development of Wireless Multimedia Sensor Networks (WMSNs) and enabled their application in many areas [5, 6]. One such area of WMSN application is noise monitoring by means of sampling audio signals and performing analogue to digital conversion [7]. However, since multimedia audio signals require larger bandwidths for data transfer, this presents a problem for large scale WMSN networks. To circumvent this problem, this paper proposes the implementation of multimedia processing within the WMSN nodes and the transfer of only relevant data
to the destination. This is accomplished by implementing the Fast Fourier Transform (FFT) within the WMSN node and recording the spectrum max-hold, reducing the load on the central server by distributing data processing among the WMSN nodes. This approach results in less network congestion than sending raw data [7] and, on the other hand, provides more information than standard noise level measuring with proprietary sensors [8]. Throughout this paper we demonstrate the hardware and software design of a WMSN node along with the WMSN architecture. We also show that existing WSN equipment can be used for the transfer of multimedia data without the need to implement additional hardware or software on the network layer. Finally, the testing of the proposed system is carried out with a comparison against previous work, emphasising the advantages of the proposed system. The paper is organised as follows. Section II introduces basic concepts regarding environmental noise monitoring and wireless multimedia sensor networks, with related work from other authors. Section III depicts the hardware and software components of the system architecture, and in Section IV the testing of the system is shown. Section V gives the conclusion and guidelines for future work. II. Environmental noise monitoring and WMSN
Environmental noise can be described as the sum of noise from transport, industrial and recreational activities. According to [11], the main target of the 2002/49/EC directive is integrated noise management. In the first step the competent authorities in the European member states had to produce strategic noise maps for major roads, railways, airports and agglomerations. The second step is to inform and consult the public. The third step is producing local action plans to reduce noise. When it comes to the reduction of environmental noise, reliable information
regarding noise must be present in real time in order to plan the noise reduction. One way of reducing noise is to reduce traffic noise by redirecting traffic flow. For instance, halving the traffic flow on a residential street with light traffic may reduce noise by 3 dB, yet the number of redirected vehicles could be quite small and easily absorbed by neighbouring roads built purposely to take higher traffic flows. Even though the redirection of traffic away from one road reduces the noise on that road, it increases the noise on the other roads. However, since noise is logarithmically related to traffic volume, the reduction in noise on the first road is much greater than the noise induced on the road that receives the redirected traffic. This gives an advantage when it comes to noise reduction using dynamic traffic redirection [2]. WSN and WMSN in environmental noise monitoring
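As a numerical aside before turning to WSN/WMSN details: the 3 dB halving figure and the logarithmic traffic-noise relation quoted above can be checked in a few lines. This is a sketch; the 10·log10 proportionality between noise energy and traffic volume (for an unchanged vehicle mix) is the standard assumption behind the figure in [2].

```python
import math

def noise_change_db(flow_before: float, flow_after: float) -> float:
    """Change in equivalent noise level (dB) when traffic flow changes,
    assuming radiated noise energy is proportional to traffic volume."""
    return 10.0 * math.log10(flow_after / flow_before)

# Halving the flow on one residential street:
print(round(noise_change_db(1000, 500), 2))   # about -3.01 dB
# Absorbing those 500 vehicles into a road already carrying 4000:
print(round(noise_change_db(4000, 4500), 2))  # only about +0.51 dB
```

The asymmetry of the two printed values is exactly the advantage of dynamic traffic redirection claimed in the text.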
In order to enable the remote monitoring of environmental noise, an infrastructure needs to be established for the sole purpose of relaying noise data. This can be a
Figure 1. An example of a Wireless Sensor Network interconnected with the server through the Internet and a gateway.
costly solution if an entire infrastructure needs to be set up over a large area. One solution to this problem is the implementation of a Wireless Sensor Network (WSN), which can host hundreds of nodes interconnected in a mesh network and is presented as a cost effective solution (Figure 1). This network can support relaying data from multiple nodes to a central node or to a backbone network (such as an existing IP network infrastructure). However, since a WSN is primarily intended for the transmission of short data streams, it is inadequate for the transfer of large multimedia content (such as audio noise streams). For this purpose a new concept, the Wireless Multimedia Sensor Network (WMSN), is introduced to accommodate multimedia requirements in wireless sensor networks [5]. The development of WMSNs is mainly attributed to the wide range of low cost multimedia devices available, such as small microphones and low cost CMOS systems [6]. These multimedia devices can capture multimedia content and transmit the data through the WMSN. Data in a WMSN is not only scalar data but also multimedia data, such as image, video and audio data, that needs to be effectively propagated through the network. In order to facilitate multimedia streaming, the bandwidth of a WMSN must be broader than that of a WSN, resulting in more data throughput. This can be problematic, since most WSN standards define the data rates and bandwidths of use, so it is questionable whether the existing technologies (such as ZigBee along with IEEE 802.15.4) can accommodate multimedia requirements. Analysed from the perspective of environmental noise monitoring, the audio data needs to be transferred from hundreds of nodes to the server, exceeding existing WSN bandwidths several times over. However, since environmental noise monitoring does not require real time audio data, it is possible to reduce the load on the network bandwidth by processing the multimedia data within the WMSN node and relaying only audio metadata. In this manner the processing of the audio signal is distributed within the network and only the information relevant for the monitoring is transmitted to the server. Related work Related work on this subject is versatile and encompasses a wide area from applied acoustics to wireless sensor networks. However, most of the related work does not take a broad approach to environmental noise monitoring.
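The bandwidth argument above can be made concrete with a back-of-the-envelope estimate. This is a sketch: the 22 kHz/12-bit sampling and the 256-bin spectrum reported roughly every 10 s are the values given later in this paper, while the 16-bit encoding per spectrum bin is an assumption.

```python
RAW_BPS = 22_000 * 12        # streaming raw 12-bit ADC samples at 22 kHz
META_BPS = 256 * 16 / 10     # one 256-bin spectrum (16 bits/bin) every 10 s

print(RAW_BPS)                    # 264000 bit/s of raw audio per node
print(META_BPS)                   # 409.6 bit/s of metadata per node
print(round(RAW_BPS / META_BPS))  # roughly a 645x reduction in network load
```

Even against the 250 kbit/s gross rate of IEEE 802.15.4 at 2.4 GHz, raw audio from a single node already exceeds the channel, while the metadata of hundreds of nodes fits comfortably.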
The authors in [8] propose a noise data acquisition system using wireless networks, where the wireless component is established by GPRS/EDGE modems and the noise sensor is a proprietary device interfaced using the RS232 communication protocol. This is a very ineffective system, as every node needs to have a modem with a SIM card and an expensive noise sensor that can transmit only a single value (the noise level). Also, every data transmission is subject to payment to the operator, which results in a costly solution. Next, the authors in [4] describe a continuous monitoring system for noise map validation; however, the authors lack a detailed system description. From the work in the paper it can be concluded that the monitoring stations were not interconnected but were instead only equipped with some sort of data acquisition and logging tools. On the other hand, the paper [7] presents an interesting approach to an acoustic monitoring system, where WSN nodes monitor the surroundings and, on a sound interrupt, start recording audio. This approach, however interesting, is unsuitable for environmental noise monitoring, since constant information regarding the noise must be present even if the noise is not crossing any threshold. Also, as the nodes sample data, a large number of bytes is transferred through the network, so the network is easily congested. This approach is ineffective in environmental noise monitoring since real time audio data is not required to assess the noise level. This can be seen from [10], where a spectral approach to traffic noise is taken to assess the overall environmental noise. The authors there model the traffic noise according to the number of vehicles, but instead of modelling the overall environmental noise it is possible to monitor the amplitude spectrum of the noise in real time for hundreds of different nodes located within the observed area. This idea is the guideline for our work, where the multimedia processing is distributed within WMSN nodes with the functionality of calculating the amplitude spectrum using the FFT and transmitting the spectrum maximum values to the server every designated time interval. III. Remote environmental noise monitoring system
To circumvent the drawbacks of the existing systems, this paper proposes a remote environmental noise monitoring system based on a
Wireless Multimedia Sensor Network composed of existing WSN modules, functioning as a WMSN. Further on, the processing of the noise audio signal is distributed within WMSN nodes, so the overall load on the server is reduced. Topology of the remote environmental noise monitoring system is shown on Figure 2.
Figure 2. Wireless Multimedia Sensor Network designed for environmental noise monitoring. All WMSN nodes are interconnected with wireless links and data exchange is enabled on links with sufficient link quality.

Figure 3. Block diagram of a WMSN node. The node is composed of a microphone, analogue processing circuitry, an embedded system (for ADC and FFT processing) and a WSN module.

All nodes have the ability to route data packets, extending the range of the WMSN with every added node. This is very useful in larger networks, primarily because network congestion can be regulated on a lower layer, eliminating the need to implement specific routing and congestion control protocols. As seen from Figure 2, the main components of the WMSN are the WMSN nodes and the network coordinator (connected towards the server). The main task of the network coordinator is to establish the entire network and coordinate the data traffic throughout the network. In the end, the network coordinator functions as a data sink for all WMSN nodes and its responsibility is to direct the data traffic towards the server. Since WMSN nodes acquire multimedia data and do not reproduce it, the network can be considered a one-way network, where the majority of traffic is sent to the sink – the network coordinator. From this point of view the network backbone is fairly simple in design.
WMSN node The majority of the network intelligence and data processing is located within the WMSN nodes. The main functionality of a WMSN node is to process the noise audio data, prepare noise metadata for transmission and, in conjunction with other nodes, find the most effective route towards the sink. This is accomplished by using WSN nodes that act as WMSN nodes in the network, already integrated with routing protocols on lower layers. With this advantage the WMSN node need not worry about data propagation through the network, but only about multimedia data processing. The architecture of a WMSN node is shown on Figure 3.

The main components of the WMSN node are the microphone, the analogue processing circuitry, the embedded system and a WSN module. The functionality of the WMSN node is as follows: the noise audio signal from the environment is converted to an electrical signal by the microphone, after which the signal is amplified and filtered by the analogue processing circuitry. Since the WMSN node has to emphasise low complexity, the analogue processing circuitry uses only one IC (LM386) and a few passive components [12]. The analogue processing circuitry schematic is shown on Figure 4.
Figure 4. Schematic of the analogue signal processing module. The low complexity of the circuitry is evident from the small number of components: one active and five passive components, including the microphone.

a) Sampling and FFT processing before implementing optimisation of the source code
Next, the amplified and filtered analogue signal is fed to the embedded system for Analogue to Digital Conversion (ADC) and the Fast Fourier Transform (FFT). The embedded system is composed of a single microcontroller, namely the Atmel ATxmega128A1, a 16-bit microcontroller operating on a 46 MHz clock. The choice of microcontroller is a compromise between high processing power and low cost per unit; Atmel's microcontroller fulfils all the prerequisites and presents a good compromise. Also, since this is an advanced microcontroller series, it incorporates a high precision 12-bit ADC, making it an ideal candidate for audio processing within a WMSN node. The main functionality of the WMSN node is sampling the signal and calculating its FFT. Existing solutions based on sampling and FFT calculation are quite ineffective, as they work in sequential order (sampling, FFT processing, data transmission), leaving a portion of time in which the audio signal is not sampled. This can be seen in Fig. 5 a), where signal (1) shows the sampling of the audio signal and signal (2) shows the FFT processing of the sampled signal. To circumvent this problem, a code optimisation was performed by means of virtual parallelisation using the microcontroller's interrupts. The effect of the implemented optimisation can be seen in Figure 5 b).
b) Sampling and FFT processing after implementing optimisation

Figure 5. Sampling and FFT processing from the data acquisition tool. Diagram a) shows the sampling and FFT processing before the code optimisation, whereas diagram b) shows the effect of the code optimisation by means of virtual parallel processing.
After the optimisation we can see that the sampling of the audio signal is now continuous and the FFT processing is carried out in parallel with the sampling. This results in real time FFT calculation and eliminates the possibility of missing a vital portion of the audio signal. By optimising the code a maximum sample rate of 30 kHz was achieved, representing solid performance for noise sampling. Since a noise spectrum rarely exceeds 8 kHz [10], the sampling rate was reduced to 22 kHz, resulting in a maximum detectable frequency of approximately 11 kHz (according to the Nyquist theorem). After each FFT calculation the maximum power spectrum values are stored, and this process continues within a time period T (around 10 seconds or more). After the time period expires, the power spectrum maximum is transmitted to the WSN module over the USART interface (using CTS handshaking) and the module relays the data towards the server. Since the WSN module is autonomous, the data routes throughout the network are discovered automatically and the data is automatically sent to the coordinator.
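The node-side pipeline described above (22 kHz sampling, 512-point FFT, bin-wise max-hold over a period T) can be sketched as follows. This is a simplified NumPy model, not the actual firmware; the Hann window and the test tone are illustrative assumptions.

```python
import numpy as np

FS = 22_000   # sampling rate (Hz), as stated in the text
N_FFT = 512   # FFT length -> 256 power-spectrum bins

def power_spectrum(frame: np.ndarray) -> np.ndarray:
    """One-sided power spectrum (256 bins) of one 512-sample frame."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    return np.abs(spectrum[:N_FFT // 2]) ** 2

def max_hold(frames) -> np.ndarray:
    """Bin-wise maximum over all frames captured during one period T;
    this 256-value vector is the metadata sent to the coordinator."""
    hold = np.zeros(N_FFT // 2)
    for frame in frames:
        np.maximum(hold, power_spectrum(frame), out=hold)
    return hold

# A 3.6 kHz tone present in only one frame still dominates the max-hold:
t = np.arange(N_FFT) / FS
held = max_hold([np.zeros(N_FFT), np.sin(2 * np.pi * 3600 * t)])
print(int(np.argmax(held)))   # peak near bin 3600 * 512 / 22000, i.e. bin 84
```

On the microcontroller the same max-hold is just a running element-wise maximum updated after each interrupt-driven FFT, so no frame buffering beyond the current period is needed.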
Server Side (LabVIEW Application) On the server side an important segment of this system is the application that parses the received data and displays it to the user. The received audio metadata is the power spectrum of the audio signal, so the number of spectral components is equal to NumberOfSamples/2. In this paper we used an FFT with 512 samples, resulting in 256 samples of the FFT power spectrum. These samples are transmitted through the network to the server, where the data is stored. In a large system composed of hundreds of nodes and dozens of network coordinators, the network backbone can be an IP network, where the data packets are sent towards the server and the server stores the data. In our testbed we used a direct serial link from the network coordinator to the server in order to demonstrate the transfer of the power spectrum from the WMSN node to the server. For that purpose a LabVIEW application was developed to display the power spectrum, the average and maximum values of the power spectrum, and to graph these values over time [15]. The application GUI alongside the "G" source code is shown on Figure 6.
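The per-frame statistics plotted by the LabVIEW application (average and maximum of each received spectrum, graphed over time) amount to the following. This is a language-neutral sketch in Python; the 256-bin frame layout is taken from the text, while the framing of the serial stream itself is left out.

```python
def frame_stats(spectrum):
    """Average and maximum of one received 256-bin power spectrum."""
    if len(spectrum) != 256:
        raise ValueError("expected 256 spectral components (512-point FFT)")
    return sum(spectrum) / len(spectrum), max(spectrum)

history = []  # one (avg, max) pair per received frame, plotted over time
for frame in ([0.0] * 255 + [1024.0], [2.0] * 256):
    history.append(frame_stats(frame))
print(history[0])   # (4.0, 1024.0)
```

The first example frame shows why the maximum trace sits far above the average trace in the measurements: a single strong spectral component barely moves the mean.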
As seen from Figure 6 the application has the ability to display the last captured audio power spectrum on the top inset and on the bottom inset the average and maximum values are plotted over time. The application accesses the serial COM port where the network coordinator is connected and parses the received data samples. The current application supports only one node, for testing purposes, but a production application would be based on a service storing the data to a database on a Virtual Private Server (Cloud Architecture). The testing of the proposed system was carried out by implementing the functionality specified in Section III using an XMega Development kit. The developed WMSN node can be seen on Figure 7.
a) "G" source code

b) LabVIEW Graphical User Interface

Figure 6. Developed LabVIEW application on the server, designed to display the power spectrum of the audio signal and to calculate maximum and average values. The "G" source code is shown alongside the GUI.

Figure 7. Developed Wireless Multimedia Sensor Network node. Node components are depicted on the figure. The embedded system is located on the back of the device and is therefore not visible in the figure.
After establishing a functional node, the WMSN and the LabVIEW application, the testing was carried out in two segments. The first segment included a measurement of the environmental noise within a building complex over a short period of time. The measurements conducted are shown in the LabVIEW application (Figure 8).
Figure 8. Testing of the proposed system by measuring indoor noise over a period of time. The measurement can be seen on the lower inset, whereas the upper inset shows the last measured power spectrum maximum.

From the measurement shown on Figure 8 we can observe that the average and maximum values change through time in a normal working environment. An important fact to observe is that the maximum values are much higher than the average, which is composed of lower amplitude levels. The amplitude levels shown on the graph are the result of the FFT conducted on the digitised samples in the range 0 – 4096, related to voltage levels of 0 – 2.6 V respectively. To verify the results of the power spectrum, a second set of measurements was conducted, consisting of generating sine wave audio signals of different frequencies and analysing the signals in the power spectrum domain. The measurements are shown on Figure 9.

a) f = 1700 Hz

b) f = 2000 Hz

c) f = 3600 Hz

d) f = 5500 Hz

e) f = 8500 Hz

Figure 9. Frequency testing of the proposed system. An audio sine wave signal of various frequencies and amplitudes is generated and the effect is observed on the FFT power spectrum.
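The test frequencies above map onto spectrum bins through the ratio f·N/fs. A quick check, using the 512-point FFT and 22 kHz sampling rate stated earlier:

```python
FS = 22_000   # sampling rate (Hz)
N = 512       # FFT length -> 256 one-sided bins

def expected_bin(freq_hz: float) -> int:
    """Spectrum bin (0..255) where a pure tone of freq_hz should peak."""
    return round(freq_hz * N / FS)

for f in (1700, 2000, 3600, 5500, 8500):
    print(f, "Hz -> bin", expected_bin(f))
# e.g. 3600 Hz -> bin 84, 8500 Hz -> bin 198; bin 256 corresponds to 11 kHz
```

Comparing these predicted bin positions against the peaks in the Figure 9 panels is exactly the verification the measurement performs.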
From Figure 9 it is clear that the generated sine signals are adequately transformed into the spectral domain, as the range 0 – 256 on the power spectrum corresponds to frequencies of 0 – 11 kHz. From this measurement it is safe to conclude that the processed FFT is indeed the power spectrum of the audio signal, thus reflecting the qualities of the original signal, which is of utmost importance in environmental noise monitoring. On the other hand, one downside of the proposed system is the need to calibrate the analogue values in order to achieve an accurate relation between sound pressure and the values given in the FFT spectrum. This can be accomplished by using specialised calibrated microphones, so that the relation between voltage and sound pressure can be expressed mathematically. IV. Conclusion and future work This paper proposes a remote system for environmental noise monitoring based on a WMSN. The proposed system's
main advantages over the existing solutions are the possibility of installing a large number of WMSN nodes within a designated area, the distributed processing of the multimedia data within the WMSN nodes, and the low cost per unit of the WMSN node, as the hardware layout is very simple. Furthermore, by using existing WSN nodes (XBee PRO S2B) the need to develop custom routing and data transmission protocols is avoided, as the WSN nodes are autonomous and provide the desired functionality on lower layers. With these advantages this system could be integrated very easily over a larger urban or rural area without the need to install additional infrastructure such as cables or data links. By separating a large network into smaller segments controlled by network coordinators, it is possible to reduce the possibility of congestion and to create a more effective network. Also, by interconnecting all the network coordinators with the main server using an existing IP network (for instance the Internet), it is possible to receive vast amounts of data for a large area without the need to additionally process noise data from every node on the server side. The testing of the proposed system was carried out using a single WMSN node, and it was shown that the processing of the noise audio data can be effectively accomplished on the WMSN node and the data can be easily transferred to the network coordinator. Also, a LabVIEW application was developed for representing the power spectrum data. Finally, by implementing sine wave tests it was concluded that the implemented FFT is accurate within the specified bandwidth, so it can be used for power spectrum noise monitoring. Future work is aimed at calibrating the WMSN sensors so that they can give accurate information regarding environmental noise. Also, an interconnection of all network coordinators to the Internet is planned using network coordinator gateway stations.
In the end, it remains to research the impact of a large number of nodes on network congestion and the possible data loss within the network.
References
[1.] Mircea, M.; Kovacs, I.; Stoian, I.; Marichescu, A.; Tepes-Bobescu, A., "Strategic mapping of the ambient noise produced by road traffic, according to European regulations," Automation, Quality and Testing, Robotics, 2008. AQTR 2008. IEEE International Conference on, vol. 3, pp. 321-326, 22-25 May 2008. [2.] "Inventory of Noise Mitigation Methods", The European Commission Directorate-General: Environment, Working Group 5, July 2002. [3.] A. Can, L. Dekoninck, M. Rademaker, T. Van Renterghem, B. De Baets, D. Botteldooren, "Noise measurements as proxies for traffic parameters in monitoring networks", Science of The Total Environment, Volumes 410-411, 1 December 2011, Pages 198-204. [4.] Piotr Mioduszewski, Jerzy A. Ejsmont, Jan Grabowski, Daniel Karpiński, "Noise map validation by continuous noise monitoring", Applied Acoustics, Volume 72, Issue 8, July 2011, Pages 582-589, ISSN 0003-682X. [5.] Akyildiz, I.F.; Melodia, T.; Chowdhury, K.R., "Wireless Multimedia Sensor Networks: Applications and Testbeds," Proceedings of the IEEE, vol. 96, no. 10, pp. 1588-1605, Oct. 2008. [6.] Harjito, B.; Song Han, "Wireless Multimedia Sensor Networks Applications and Security Challenges," Broadband, Wireless Computing, Communication and Applications (BWCCA), 2010 International Conference on, pp. 842-846, 4-6 Nov. 2010. [7.] Muhammad Fahmi Firdaus Bin Ismail, Leong Wai Yie, "Acoustic Monitoring System Using Wireless Sensor Networks", Procedia Engineering, Volume 41, 2012, Pages 68-74, ISSN 1877-7058. [8.] Sen Bai; Zudian Qin; Xiang Li; Li Zhu, "The Application and Research of Noise Data Acquisition with Wireless Network," Environmental Science and Information Application Technology, 2009. ESIAT 2009. International Conference on, vol. 3, pp. 693-696, 4-5 July 2009.
[9.] Khaled A. Ali, Hussein T. Mouftah, "Wireless personal area networks architecture and protocols for multimedia applications", Ad Hoc Networks, Volume 9, Issue 4, June 2011, Pages 675-686, ISSN 1570-8705, 10.1016/j.adhoc.2010.09.006. [10.] A. Can, L. Leclercq, J. Lelong, D. Botteldooren, "Traffic noise spectrum analysis: Dynamic modeling vs. experimental observations", Applied Acoustics, Volume 71, Issue 8, August 2010, Pages 764-770, ISSN 0003-682X, 10.1016/j.apacoust.2010.04.002. [11.] Directive 2002/49/EC of the European Parliament and of the Council of 25 June 2002.
[12.] Texas Instruments LM386 Low Voltage Audio Power Amplifier (Rev. A), http://www.ti.com/lit/ds/symlink/lm386.pdf [13.] Atmel ATxmega128A1 Datasheet. [14.] Digi International XBee PRO S2B Wireless Modules. [15.] National Instruments LabVIEW System Design Software, www.ni.com
MEMRISTORS IN THE ESD PROTECTION György ELMER, Ph.D. University of Pécs, Pollack Mihály Faculty of Engineering and Information Technology, Pécs, Hungary email: elmer@vili.pmmf.hu
Abstract: Memristors as fundamental two-terminal electric circuit elements can have application in the field of electrical engineering as well, e.g. in the fields of over-current protection, overvoltage protection and EMC. This paper proposes an application opportunity of memristors in ESD protection. Based on their purely dissipative character, these devices can usefully co-operate with ceramic capacitors having no ability to dissipate the energy of ESD surges. A type of structure of memristors for ESD protection is proposed. Keywords: memristor, ESD protection, ceramic capacitor, dissipation, SPICE model. I. Introduction Memristors belong to the newest development issues in the field of electronic engineering as they constitute the newest, namely the fourth fundamental electric circuit element. The name memristor is an abbreviation of “memory resistor”, since a memristor is a memory element in the electric circuit and is at the same time a dissipative element as well. Fig. 1 shows the symbol of memristors in electric circuits.
Fig. 1. Symbol of memristors in electric circuits
A memristor (MR) is a new basic, two-terminal circuit element that completes the missing link between charge q and flux linkage φ, which was postulated by Chua in 1971 [1]. Characteristic for an MR is the relationship between q and φ:

M(q) = dφ/dq    (1)

in ohms (Ω). Thus a memristor is a device with a characteristic controlled by the charge or the flux, where

M(q) = df_M(q)/dq    (2)

is the memristance value in ohms (Ω), typically in the case of a charge-controlled MR, and

W(φ) = dg_M(φ)/dφ    (3)

is its memductance value in siemens (S), typically in the case of a flux-controlled MR. The memory effect of memristors means that the resistance value of a memristor depends on the electric charge q transported by the current i that has previously flowed through it:

q = ∫_{−∞}^{t} i(t)dt = q(t_0) + ∫_{t_0}^{t} i(t)dt    (4)

The resistance value of an MR increases when current flows through it in one direction and decreases if the current flows in the opposite direction. Thus the MR device completes the set of two-terminal basic electric circuit elements operating on the basis of the relationships between the different electromagnetic variables. This complete graph of relationships is shown in Fig. 2.

Fig. 2. Relationships and circuit elements between electromagnetic variables
II. Characteristics of ESD memristors An aspect determining the length of the MR is the range of resistance between its ON and OFF states. In the OFF state, i.e. under normal conditions without any ESD surge, the resistance ROFF has to be as high as possible to prevent unnecessary dissipation, but low enough to enable the capacitor to charge and discharge. Thus in the OFF state the MR has to behave like an electrostatically dissipative material. In the ON state of the MR, i.e. during an ESD surge attack, the resistance RON is determined by the ESD current and the ability to dissipate. The above requirements can be contradictory. Proposed MR structures are shown in Fig. 3, showing a type of transition between the doped and undoped region of the MR following a linear variation of doped and undoped lengths. Possible dimensions of an ESD protective MR are shown as well. The length dimension of this MR is greater than that of the HP MR made for memory purposes by about three orders of magnitude; however, the cross section has to be greater by about six orders of magnitude.
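The ON/OFF behaviour discussed above can be made concrete with the linear dopant-drift model commonly used for the HP memristor. This is a numerical sketch only; the parameter values are illustrative, not those of an actual ESD device.

```python
# Linear dopant-drift memristor model (after Strukov et al. / HP):
#   M(x) = R_ON * x + R_OFF * (1 - x),  with state x = w/D in [0, 1]
#   dx/dt = (mu_v * R_ON / D^2) * i(t)  -> lumped here into MU_K
R_ON, R_OFF = 100.0, 16_000.0   # ohms (illustrative values)
MU_K = 10_000.0                 # lumped mobility term (1/coulomb), illustrative

def simulate(currents, dt, x0=0.5):
    """Memristance after applying a sequence of current samples i(t)."""
    x = x0
    for i in currents:
        x = min(1.0, max(0.0, x + MU_K * i * dt))  # state drifts with charge
    return R_ON * x + R_OFF * (1.0 - x)

print(simulate([], 1e-6))             # no current: state unchanged, 8050.0 ohms
print(simulate([1e-2] * 100, 1e-6))   # forward (surge) current drives M towards R_ON
print(simulate([-1e-2] * 100, 1e-6))  # reverse current drives M back towards R_OFF
```

The clipping of x models the finite doped length: once the surge has driven the device fully ON, further charge leaves the resistance at R_ON, which is the state in which the ESD energy is dissipated.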
Actual realisations of the above MRs can show a doped region length variation depending on two dimensions, e.g. in the case of an MR with a cuboid shape, or on a single dimension r (radius), e.g. in the case of a cylindrically shaped MR. In both cases the complete device can be considered as a finite or infinite number of memristors connected in parallel to each other. Because of this connection topology, an MR with the above structure can be optimally handled through its memductance value, i.e. this device is considered a flux-controlled MR. In the case of a flux-controlled MR, the memductance value defines the dependence between the current i and the voltage v:

i(t) = G_m(φ) · v(t)    (1)

Generally, the dependence of the memductance value G_m on the flux φ of an MR with constant doped length and a characteristic as shown in Fig. 5 can be considered in the form of a Taylor series:

G_m(φ) = g_1 + Σ_{k=2}^{∞} k · g_k · φ^{k−1}    (2)

where the g_k values have different units, as a matter of course, and g_1 prevents potential conflicts of multiple sources [2].
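Equation (2) above is straightforward to evaluate once the series is truncated to m terms. A sketch; the coefficient values g_k are placeholders, not fitted device data.

```python
def memductance(phi: float, g: list) -> float:
    """G_m(phi) = g_1 + sum_{k=2..m} k * g_k * phi^(k-1),
    with g = [g_1, g_2, ..., g_m] a truncated coefficient list."""
    total = g[0]
    for k in range(2, len(g) + 1):
        total += k * g[k - 1] * phi ** (k - 1)
    return total

# With only g_1 and g_2 the memductance is linear in phi: g_1 + 2*g_2*phi
print(memductance(0.5, [1e-3, 2e-4]))   # 1e-3 + 2*2e-4*0.5 = 1.2e-3 S
```

At phi = 0 only the constant term g_1 survives, which matches the role of g_1 in the series above.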
Fig. 3. A proposed memristor structure for ESD protecting purposes
Fig. 4 shows a non-linear variation of transition between the doped and undoped region. These structures can possibly yield MRs with optimum resultant RON and ROFF values preserving meanwhile the desired operation speed.
Fig. 5. A general dependence of the charge q on the flux φ
The resultant memductance G of the proposed MR, with the transition surface between the doped and undoped region depending on the area A, is

G(φ) = ∫_A g_1(A)dA + Σ_{k=2}^{∞} k · ∫_A g_k(A) · φ^{k−1}(A)dA    (3)

since both the g and φ values depend on A.

Fig. 4. Another proposed memristor structure for ESD protection

If the surface dependence is treated as a finite number (n) of parallel MRs,
G(φ) = Σ_{l=1}^{n} g_{1l} + Σ_{l=1}^{n} Σ_{k=2}^{m} k · g_{kl} · φ_l^{k−1}    (4)

i.e.

G(φ) = G_1 + Σ_{l=1}^{n} Σ_{k=2}^{m} k · g_{kl} · φ_l^{k−1}    (5)
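Equations (4)-(5) simply state that parallel memductances add, element by element. Numerically (a sketch reusing hypothetical per-element coefficients g_kl):

```python
def parallel_memductance(phis, coeffs):
    """Resultant G(phi) of n parallel flux-controlled MRs:
    G = sum_l [ g_1l + sum_{k=2..m} k * g_kl * phi_l^(k-1) ]."""
    total = 0.0
    for phi, g in zip(phis, coeffs):
        total += g[0]                            # the G_1 contribution
        for k in range(2, len(g) + 1):
            total += k * g[k - 1] * phi ** (k - 1)
    return total

# Two identical elements seeing the same flux give twice one element's G:
one = parallel_memductance([0.3], [[1e-3, 2e-4]])
two = parallel_memductance([0.3, 0.3], [[1e-3, 2e-4]] * 2)
print(round(two / one, 6))   # 2.0
```

This additivity is what lets the area-dependent structure of Fig. 3 and Fig. 4 be analysed as a bundle of elementary memristors.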
III. Conclusions Apart from their applications now being developed in the field of electronics, memristors, as fundamental two-terminal, passive electric circuit elements, will surely find application in the field of electrical engineering as well, like other electronic devices. This paper discussed an application opportunity for memristors in ESD protection. Based on their purely dissipative character, MRs can usefully co-operate with ceramic capacitors, which have no ability to dissipate the energy of ESD surges. Memristor structures have been proposed in this paper with a transition between the doped and undoped region of the MR following a linear or non-linear variation of doped and undoped lengths. These structures of "big" memristors for ESD protection purposes can possibly yield an optimum resultant RON low enough to conduct the ESD surge current and ROFF values high enough to prevent the MR from dissipating under normal conditions, while preserving the desired operation speed.
References
[1.] L. Chua, IEEE Trans. Circuit Theory 18, 507– 519, 1971. [2.] V. Biolkova, Z. Kolka, Z. Biolek, D. Biolek, Memristor modelling based on its constitutive relation, Proceedings of the European Conference of Systems, and of the European Conference of Circuits Technology and Devices, and of the European Conference of Computations, and of the European Conference on Computer Science, pp. 261-262, 2010.
POSSIBLE APPLICATIONS OF PROBLEM BASED LEARNING IN ENGINEERING IN TERTIARY EDUCATION Ildikó Horváth Pollack Mihály Faculty of Engineering and Information Technology, University of Pécs, Pécs, Hungary email: horvath.ildiko@pmmik.pte.hu
Abstract: Students' interest in the natural sciences has been decreasing for years in Hungary. Primarily, we can see that the popularity of physics, chemistry and maths has decreased at all levels of education. As a result, fewer and fewer interested and talented students choose a career in the natural sciences or engineering. Updating the teaching methods both in public education and in tertiary education is necessary, besides increasing the proportion of learning connected to personal experience, experiments and practice. In the article below, the possibilities provided by problem based learning and teaching are presented. In the author's opinion, the application of the PBL method can significantly contribute to raising and maintaining interest in the natural sciences and engineering. I. Introduction The demographic and social changes in Hungary and the decrease in students' interest in the natural sciences have resulted in a drastic decrease in the number of engineering students in tertiary education. Universities have become suppliers, on the one hand, towards the students, and on the other hand, towards the operators in the economy and industry, who will be the future employers of graduating students. Employers expect students to be practical and creative and to recognise and solve problems, besides having professional knowledge. II. Problem based learning PBL is a teaching strategy that leads students to learn how to learn and encourages them to develop critical thinking and problem solving skills that they can carry for life. [2] PBL is the search for solutions to life's messy problems. It was developed at McMaster University, Canada, in medical and health sciences by the end of the 1960s [3]. Problem-based learning (PBL) is an emerging teaching approach which has gained prominence in tertiary education in recent
years[4]. PBL crosses a broad spectrum of instructional patterns, from total teacher control to more emphasis on self directed student inquiry. Patterns of power and control of decision making are affected by what calls “reculturing”. It is a shift from the traditional didactic teaching where the core knowledge discovery process lies almost entirely in the hands of the learner rather than the teacher. [5] articulated what has become one of the most widely used definitions of PBL. He termed it “authentic PBL” and argued that it has four key characteristics: Problem-based. It begins with the presentation of a real life (authentic) problem stated as it might be encountered by practitioners. Problem-solving. It supports the application of problem-solving skills required in “practice.” The role of the instructor is to facilitate the application and development of effective problem-solving processes.
Student-centred. Students assume responsibility for their own learning and faculty act as facilitators. Instructors must avoid making students dependent on them for what they should learn and know.
Self-directed learning. It develops research skills. Students need to learn how to get information when it is needed and will be current, as this is an essential skill for professional performance.
Reflection. This should take place following the completion of problem work, preferably through group discussion, and is meant to enhance transfer of learning to new problems.

III. Research methodology
We formed two groups of 5 people each in the framework of student study circles. Both groups were given the same PCB (printed circuit board) design task. One of the groups was given the necessary knowledge through traditional teaching, while the other group solved the task according to the PBL method in group work, sorting out ideas by helping one another; the lecturer supported their work only as a tutor. To help the students, I drew their attention to the following website: http://www.pinguino.cc/ The basic requirement for accepting the task was a circuit documentation containing the schematic circuit diagram and the parameters of the components, plus a brief description of how the circuit works, how high a voltage can be measured on each wire, how much current flows through it, and what signal level and frequency range the specific conductor transmits. The following details can also be defined for planning:
• Shape and size of the panel.
• Number of layers.
• Fixed component positions.
• Position and size of fixing points.
• Position and type of plugs and connecting points.
• Positioning the components according to their geometrical size.
• Adjusting to the cooling fin.
• Dismantling into modules.
Students used the freely downloadable ExpressSCH and ExpressPCB software to make the circuit diagram and the PCB.

IV. Discussion
Students reported that the PBL approach was more engaging and interesting, as it allowed them to construct their own knowledge instead of absorbing the teacher's words, and to seek information to solve problems. Students also reported that they developed specific work skills such as the ability to research, produce syntheses, express ideas, communicate, and work effectively in teams to develop solutions to problems. Previous research has also suggested that students evaluate the problem-based learning approach positively. Furthermore, I am planning to test the efficiency of the PBL method on programming, modelling and simulation tasks. If the introduction of the method receives positive feedback there, it would be worth considering introducing the PBL method in the framework of an updated curriculum. A curriculum with this new approach can also help motivate students and sustain their interest in natural sciences and the profession.

V. Summary of results
The results from this study suggest that students gained more with the problem-based learning approach than with the traditional lecture approach.

References
[1.] Bizjak, G. (2008): Load flow network analysis with problem-based learning approach.
[2.] Fogarty, R. (1998): "Problem Based Learning, a collection of articles", Hawker Brownlow, Australia.
[3.] Perrenet, J.C.; Bouhuijs, P.A.J.; Smits, J.G.M.M. (2000): "The Suitability of Problem-based Learning for Engineering Education: theory and practice", Teaching in Higher Education, Vol. 5, No. 3.
[4.] Barell, J. (1998): "Problem Based Learning, an inquiry approach", Hawker Brownlow Education, Australia.
[5.] Barrows, H.S. (1998): "The essentials of problem-based learning", Journal of Dental Education, 62 (9), pp. 630-633.
[6.] Csíkos, Cs. (2003): Egy hazai matematikai felmérés eredményei nemzetközi összehasonlításban. Iskolakultúra, 8. 20-27.
[7.] Molnár, Gy. (2004): Problémamegoldás és probléma alapú tanítás. Iskolakultúra, 2004/2, 12-19.
SOLUTION DIVERSITY FOR A SPECIFIED PROJECT IN MECHATRONICS Igor Fuerstner1, Laslo Gogolak2, Szilveszter Pletl3 1. Department of Mechanical Engineering, Subotica Tech Subotica, Serbia email: ifurst@vts.su.ac.rs 2. Department of Automation Subotica Tech, Subotica, Serbia email: gogolak@vts.su.ac.rs 3. Department of Automation Subotica Tech, Subotica, Serbia email: pletl@vts.su.ac.rs
Abstract: The product development process allows different solutions for a specified project. The nearly infinite number of solutions is usually limited by constraints defined by both the purchaser and the supplier of the final product. The paper presents the results of a seminar that was prepared by third-year students of mechatronics at Subotica Tech. The paper includes a detailed description of the seminar task. Different solution concepts are explored, with special focus on the various final solutions. A brief comparison of the different solutions is also given. Keywords: Mechatronics, Product Development, Solution Diversity

I. Introduction
Mechatronics is one of the most popular technical disciplines of the 21st century. It is a multidisciplinary science which consists of three main disciplines: electrical engineering, mechanical engineering and information technology [1]. The basic idea behind creating this discipline was to satisfy industrial requirements, since industry needs engineers who are competent in all three of these disciplines. Mechatronics summarizes all of the important and fundamental content of the above mentioned disciplines (Fig 1).
The aim is to achieve practical knowledge which is used in engineering. The combined knowledge in mechatronics allows engineers to think from another perspective. Mechatronics engineers have a different technical mindset from their colleagues: they can use and combine knowledge from the fields of mechanics, electronics, robotics, programming, control systems, CAD/CAM systems, manufacturing and so on. This competence is mainly applied in the field of manufacturing system design and system integration, from small and simple systems to complex manufacturing-assembly systems [2].
Figure 1. Mechatronics
The first level and basic task in mechatronics is the design and modeling of systems. According to the current trend, modeling should be done in 3D drawing software. Standard mechanical elements must be known to design the model of a system; accordingly, this level involves knowledge from the field of mechanical engineering. Furthermore, all systems, from the smallest to the biggest, contain sensors and actuators as components. Knowledge from the fields of mechanical and electrical engineering is necessary at this next level of mechatronics engineering, for choosing and using these components. Finally, the highest level of mechatronics is applying control systems, which involves programming and competence in information technology. Mechatronics engineers have an insight into each and every element of the system. They are capable of imagining, designing, integrating and implementing the system from beginning to end. At Subotica Tech - College of Applied Sciences, engineers of mechatronics are educated to fulfill the industry's need for this profile of engineer. Third-year students have to demonstrate their competences by preparing a seminar paper, which shows that they have mastered the knowledge and skills needed to develop a product in the field of mechatronics to the level of complete technical documentation. This paper presents the results of a seminar that was prepared by the students. The paper is structured as follows. First, a detailed description of the seminar task is given. Then, the different developed solutions are presented with a brief comparison. Finally, a discussion of the results and conclusions is presented.

II. Seminar task
A mechatronics device that is able to sort four different cylindrical items (Table I) should be developed and designed.
The device has the following constraints:
• The sorting is performed on an existing conveyer, which occupies a predefined area (Fig 2). It is forbidden to make any changes on the conveyer. The conveyer is equipped with an electric motor with a speed reducer (Nord, Type: SK 1S40AF-71 L/4, i=9.25, n=151 min-1) that is to be used for moving the belt;
• The "A" area is reserved for the placement of the cylinder (one cylinder is to be placed at a time by hand, without checking whether more than one cylinder is placed on the conveyer). The solution has to ensure that during the sorting period no other cylinder can be placed in the "A" area. After the placement of the cylinder, the sorting cycle has to be started with two "Start" push buttons. After the sorting cycle, i.e., when the sorted cylinder leaves the conveyer, the device has to be set to the initial position and stopped;
• The sorting has to be performed in the "B" area; the "C" area is reserved for the fast moving of the sorted cylinder;
• The sorting of the cylinders has to be designed using mechanical and electro-pneumatic solutions;
• The control of the device is performed by a PLC (OMRON CJ1G). The choice of sensors is free. For the control of the electric motor a DANFOSS inverter drive has to be used;
• All moving parts have to be guided by bearings;
• Regular safety measures have to be taken into consideration;
• A "Stop" button and a "Reset" button have to be provided;
• The device has to be designed as an independent module;
• A user manual should be provided.

TABLE I. CYLINDRICAL ITEMS' PROPERTIES
Cylinder type | Diameter (mm) | Height (mm) | Material | Density (kg/m3)
1 | 50 | 30 | Steel | 7850
2 | 50 | 50 | Steel | 7850
3 | 50 | 30 | Polypropylene | 900
4 | 50 | 50 | Polypropylene | 900

Figure 2. Existing conveyer's predefined area
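The four cylinder types of Table I differ only in material and height, so two binary sensor signals suffice to distinguish them. The following sketch is a hypothetical illustration only (not any team's documented program): it assumes an inductive sensor that trips on steel and an optical gate mounted at roughly 40 mm, above the 30 mm cylinders but below the 50 mm ones.

```python
# Hypothetical classification sketch for the Table I cylinders.
# Assumptions (not from the paper): an inductive sensor detecting steel,
# and an optical gate at ~40 mm that only the 50 mm cylinders interrupt.

def classify(inductive_on: bool, optical_40mm_on: bool) -> int:
    """Return the cylinder type (1-4) as numbered in Table I."""
    if inductive_on:                      # metallic reading -> steel cylinder
        return 2 if optical_40mm_on else 1
    # no inductive reading -> polypropylene cylinder
    return 4 if optical_40mm_on else 3

# Every (material, height) combination maps to exactly one type:
assert classify(True, False) == 1    # steel, 30 mm
assert classify(True, True) == 2     # steel, 50 mm
assert classify(False, False) == 3   # polypropylene, 30 mm
assert classify(False, True) == 4    # polypropylene, 50 mm
```

The solutions described below combine such sensor readings (inductive, optical and "Reed") in different ways; this sketch only shows why one material sensor and one height sensor are logically sufficient.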
III. Solutions
Based on the seminar task, third-year students of mechatronics at Subotica Tech prepared their seminars. In this paper, six different solutions are presented, with data regarding the number of used cylinders, the number of used sensors (optical, inductive and "Reed"), the number of linear motions and the number of rotational motions.

First solution
The first solution (Fig 3) includes 2 cylinders, 1 inductive sensor, 3 optical sensors and 6 "Reed" sensors. The solution uses 2 linear motions plus the motion of the conveyer, and no rotational motions [3].
Figure 3. First solution [3]

Second solution
The second solution (Fig 4) includes 2 cylinders, 1 inductive sensor, 2 optical sensors and 4 "Reed" sensors. The solution uses 2 linear motions plus the motion of the conveyer, and no rotational motions [4].
Figure 4. Second solution [4]

Third solution
The third solution (Fig 5) includes 3 cylinders, 1 inductive sensor, 2 optical sensors and 6 "Reed" sensors. The solution uses 3 linear motions plus the motion of the conveyer, and no rotational motions [5].
Figure 5. Third solution [5]

Fourth solution
The fourth solution (Fig 6) includes 4 cylinders, 1 inductive sensor, 3 optical sensors and 8 "Reed" sensors. The solution uses 4 linear motions, 2 rotational motions and the motion of the conveyer [6].
Figure 6. Fourth solution [6]

Fifth solution
The fifth solution (Fig 7) includes 4 cylinders, 1 inductive sensor, 3 optical sensors and 8 "Reed" sensors. The solution uses 4 linear motions plus the motion of the conveyer, and no rotational motions [7].
Figure 7. Fifth solution [7]

Sixth solution
The sixth solution (Fig 8) includes 3 cylinders, 1 inductive sensor, 3 optical sensors and 6 "Reed" sensors. The solution uses 3 linear motions plus the motion of the conveyer, and no rotational motions [8].

TABLE II. COMPARISON BETWEEN DIFFERENT SOLUTIONS
Solution number | Number of cylinders | Total number of sensors | Number of linear motions | Number of rotational motions
1 | 2 | 10 | 2 | 0
2 | 2 | 7 | 2 | 0
3 | 3 | 9 | 3 | 0
4 | 4 | 12 | 4 | 2
5 | 4 | 12 | 4 | 0
6 | 3 | 10 | 3 | 0
The presented solutions show similarities because the seminar task introduced many constraints and limitations; nevertheless, it can be concluded that in the field of mechatronics, engineers may indeed show a high level of creativity in solving problems, i.e., in designing products.
Figure 8. Sixth solution [8]

IV. Discussion of results and conclusions
The seminar task, to develop a mechatronics device able to sort four different cylindrical items, yielded six conceptually similar but, in their details, completely different solutions (Fig 3-Fig 8). The solutions were compared regarding the number of used cylinders, the number of used sensors, the number of linear motions and the number of rotational motions (Table II). The only intention of this comparison between the solutions was to show that the same task can be solved in different ways.

References
[1.] Alciatore, D.G.; Histand, M.B.: Introduction to Mechatronics and Measurement Systems, McGraw-Hill, New York, 2012.
[2.] Bolton, W.: Mechatronics, Pearson Education Limited, Harlow, 2003.
[3.] Olajos, K., et al.: Seminar task, Subotica Tech, Subotica, 2012.
[4.] Bata, Z., et al.: Seminar task, Subotica Tech, Subotica, 2012.
[5.] Lacko, L., et al.: Seminar task, Subotica Tech, Subotica, 2012.
[6.] Sarandi, H., et al.: Seminar task, Subotica Tech, Subotica, 2012.
[7.] Engi, A., et al.: Seminar task, Subotica Tech, Subotica, 2012.
[8.] Tumbas, L.S., et al.: Seminar task, Subotica Tech, Subotica, 2012.
ENERGY EFFICIENCY AND RENEWABLE ENERGY SOURCES IN THE SLAVONIA REGION Milan Ivanović, Hrvoje Glavaš, Dubravka Špiranović-Kanižaj Faculty of Electrical Engineering in Osijek, Croatia
Summary
The paper points out the legal basis for the implementation of energy efficiency in Croatia, considering the results of the analysis of energy efficiency in the Osijek-Baranja County by the basic sectors of energy consumption. It also points out an insufficient use of the local and regional potential of renewable energy sources. The conclusion emphasizes the need to introduce a regional concept in energy policy, particularly in the use of renewable energy sources.

I. Introduction
The EU has adopted an energy development strategy in order to increase the quality and security of energy supply, increase economic competitiveness and decrease the impact on climate change. In order to provide new local energy sources and reduce greenhouse gas emissions, the EU decided to use more renewable energy sources (RES) and to improve energy efficiency, especially in the buildings sector. The main objectives of the EU energy policy by the year 2020 are as follows: reduce greenhouse gas emissions by 20%, produce 20% of energy from renewable energy sources, save 20% of energy, and cover 10% of total motor fuel consumption with bio-fuels. [21] Croatia has accepted the goals of the EU, and its energy policy is focused on compliance with EU directives and on the creation of an energy market in Croatia as preparation for integration into the EU energy market. Therefore, in recent years Croatia has developed an Energy Development Strategy and numerous energy laws and regulations, adopted two National Energy Efficiency Programs (2009-12 and 2013-16) and established the Energy Efficiency Fund. [21] Croatia consumes 128% more primary energy per unit of GDP than the EU-27 average (Fig. 1). It is also very important to stress that Croatia imports more than 40% of its energy (oil, gas, electricity). Therefore, the country should pay special attention to energy efficiency and the use of renewable energy sources. [13] [14] [18]
Figure 1: Energy intensity in selected EU countries and Croatia in 2001 and 2006 (kgoe/1000 € GDP) Source: [5]
II. Energy consumption and energy efficiency in the Osijek-Baranja County
The Faculty of Electrical Engineering in Osijek (FEEOS) started an energy efficiency research project in the Osijek-Baranja County (OBC) in early 2011. Based on this research, the FEEOS research team later produced the study „The program of efficient use of energy in the final energy consumption in OBC for the period from 2012 to 2014 - with reference to the year 2016" [18]. Here, only the basic results of the analysis of energy consumption and energy efficiency in the OBC by the basic sectors of energy consumption are shown.
The main sources of energy in the final energy consumption in the OBC are: natural gas (26.3%), diesel fuel (24.6%), electricity (21.2%), motor gasoline (12.6%) and heat from the centralized heating system (7%). In the period from 2007 to 2010 there was an increase in total energy consumption from 11.123 PJ to 11.532 PJ at an average annual rate of growth (ARG) of 1.2%. Total consumption of energy in the OBC in the period from 2007 to 2010, by sector, is shown in the Tab. 1 and Fig. 3.
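The quoted average annual rate of growth can be reproduced from the two totals, treating 2007-2010 as three annual growth steps; a quick check:

```python
# Reproducing the quoted average annual rate of growth (ARG) of total
# energy consumption in the OBC between 2007 (11.123 PJ) and 2010 (11.532 PJ).
e_2007_pj = 11.123
e_2010_pj = 11.532
years = 3  # three annual steps: 2007 -> 2008 -> 2009 -> 2010

arg = (e_2010_pj / e_2007_pj) ** (1 / years) - 1
print(round(arg * 100, 1))  # 1.2 (% per year), matching the text
```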
Figure 2: Structure of final energy consumption in OBC (2010) - by sectors (%) Source: [19]
In 2010, the buildings sector (Bs) had the highest energy consumption in the OBC with 60.7%, followed by the transport sector (Tr) with 38.5%, while public lighting (PL) accounted for 0.8% (a negligible amount in total energy consumption) (Fig. 2) [18]. These facts are respected when defining objectives and measures for the realization of energy efficiency.
Figure 3: Total final energy consumption in OBC (GWh) Source: [19]
Table 1 Final energy consumption in the OBC from 2007 to 2010 (GWh)
No | Sector | 2007 | 2008 | 2009 | 2010 | ARG (%)
1 | Public lighting | 22.4 | 23.9 | 24.9 | 25.2 | 3.9
2 | Transport | 1,259 | 1,299 | 1,277 | 1,244 | -0.4
3 | Building | 1,808 | 1,887 | 1,885 | 1,934 | 2.3
4 | Total | 3,089 | 3,210 | 3,187 | 3,203 | 1.2
Source: [19]

In the energy consumption of the buildings sector in the OBC, natural gas (NG, 43%), electricity (El, 34%) and heat from the centralized heating system (CHS, 12%) are used the most; they are followed by firewood (Fw), heating oil (HO), liquefied petroleum gas (LPG), field cobs energy (FC), geothermal energy (Geo) and coal (C) [18] [19] [20]; Fig. 4
Figure 4: Structure of final energy consumption in OBC buildings’ sector (2010) - by energy kinds (MWh) Source: [19]
The education subsector in the OBC, as included in final energy consumption, comprises 87 kindergartens, 184 primary schools, 59 secondary schools, and 18 faculties and university departments (data from 2007). The main energy sources used in this subsector are natural gas (30%), heat from the CHS (29%), electricity (22%) and heating oil (19%); Fig. 5
Fig. 7 shows the structure of energy consumption in the AHHI of the household sector in the OBC in 2010, by energy kinds (%). The way the need for heat is met in the cities has many advantages, but there are also disadvantages. Critical moments occur in situations of power reduction, when a large part of the households in urban settlements are in a very unfavourable position. In crisis situations, people use electricity as an alternative form of heating residential buildings, which threatens the stability of the power system and reduces energy efficiency. The analysis of electricity consumption in the buildings sector shows a great increase of consumption in households; the reason is the increase in household equipment, modern appliances and devices. [19]
Figure 5: Structure of energy consumption in OBC buildings - subsector education (2010) - by energy kinds (MWh) Source: [19]
The household sector in the OBC is the biggest consumer in total final energy consumption of the buildings sector (74%), and the largest consumer of natural gas (88%), electricity (63%), heat from the CHS (50%), heating oil (41%), LPG (54%), firewood (92%), field cobs (100%) and coal (100%). In the structure of energy consumption of the household sector, natural gas (52%), electricity (31%) and heat from the CHS (8.2%) are used the most. Most of the households in the cities (79%) are connected to gas or the CHS, while most of the households in villages (71%) have autonomous home heating installations (AHHI); Fig. 6
Figure 7: Structure of energy consumption in AHHII of OBC households (2010) - by energy kinds (%) Source: [19]
The global conditional efficiency of energy use in final consumption in the OBC (measured by the ratio of generated gross domestic product to the energy consumed in the buildings sector, traffic and public lighting) shows a growth trend. In 2005, 1.52 kWh of energy was spent per 100 € of GDP. By 2007 this ratio had improved: 1.22 kWh of energy was spent per 100 € of GDP - Fig. 8.
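The improvement between the two quoted intensity figures can be expressed as a relative change:

```python
# Relative improvement in the OBC's energy intensity between 2005 and 2007,
# computed from the two figures quoted in the text.
intensity_2005 = 1.52  # kWh per 100 EUR of GDP
intensity_2007 = 1.22

improvement_pct = (intensity_2005 - intensity_2007) / intensity_2005 * 100
print(round(improvement_pct, 1))  # 19.7 -> roughly a fifth less energy per unit of GDP
```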
Figure 6: Structure of energy consumption in OBC household sector (2010)- by energy kinds (MWh) Source: [19]
Figure 8: GDP and final energy consumption - buildings sector, transport and public lighting - in OBC (mil. €; GWh) Source: [19]

The analysis of energy efficiency in the OBC showed that only a few public properties and residential facilities have an energy certificate, which means there are not many buildings that meet the requirements of energy efficiency. Energy prices and the billing system for energy from the CHS have not been an incentive for the rational use of energy in the buildings sector - the largest sector of energy consumption, with an unfavourable structure dominated by natural gas and petroleum products. In the coming period energy prices are expected to increase even more, which will raise the cost of public spending and decrease the standard of living. This points to the urgent need to implement energy efficiency measures and to use the available domestic renewable energy sources.

III. Renewable energy sources in Croatia
Croatia has a very good potential of renewable energy sources (biomass, geothermal, solar and wind resources), and with the Strategy [14] the country opted for the use of renewable energy sources (RES) in accordance with the principles of sustainable development. The share of RES in gross final energy consumption in 2020 is to be 20%, which will be achieved by completing the following sector goals for the year 2020: [14]
- 35% of total electricity consumption should come from renewable energy sources (including large hydropower);
- 10% of the energy used in all forms of transport (energy from gasoline and diesel fuel) should come from RES (bio-fuel);
- 20% of gross final energy consumption for heating and cooling should come from RES.
Particularly significant is the potential of biomass in Croatia - biomass from wood, agricultural biomass and potential waste of biological origin for energy production. The Strategy has set the goal of using about 26 PJ of energy from biomass in the year 2020 (about 15 PJ was used in 2010). Part of this biomass will be used in a number of biomass power plants with a total power of approximately 85 MW in 2020. In order to increase energy efficiency, plants producing electricity and heat in a combined process will have the advantage.
Table 2 Power plants per renewable energy sources in Croatia
No | Plant category | In function (2012): Plants (No) | Electrical capacity [MW] | Planned (2020): Plants (No) | Electrical capacity [MW]
1 | Solar | 20 | 0.4 | 377 | 87.7
2 | Hydro | 2 | 0.3 | 62 | 127.7
3 | Wind | 9 | 152.7 | 103 | 4,543.0
4 | Biomass | 2 | 5.7 | 91 | 228.5
5 | Biogas | 3 | 4.1 | 54 | 80.9
6 | Cogeneration | 3 | 10.5 | 6 | 36.1
7 | Geothermal | - | - | 1 | 4.7
8 | Landfill gas | - | - | 2 | 1.6
9 | Total | 39 | 173.7 | 696 | 5,110.0
Source: [22]
Planning and investment in the construction of biomass power plants is not within the jurisdiction of government institutions and public companies; it is left to private investors, who can use the benefits of the status of privileged electricity producer, with privileged rates for the electricity delivered. The privileged price per delivered kWh of energy from RES was attractive to investors, but in mid-2012 the tariff system was modified: the preferential kWh price for photovoltaic power plants was reduced and additional restrictions (or incentives) were introduced in order to strengthen energy efficiency and to support domestic industry, which will affect the so-called „accidental" solar power plant investors. On the other hand, the purchase prices of electricity from biomass have increased (for the smaller power plants from 1.2 kn to 1.3 kn per delivered kWh, and for the large plants from 0.83 kn to 0.9 kn), which will increase the interest in biomass power plant investment. [23] [24] Today, in 2012, there are 39 RES power plants in Croatia with a total electric power of 173.7 MW. The plan (registered interest) is to construct 696 projects with a total of 5,110 MW of electric power and an additional heat power of 88 MW (Tab. 2). When it comes to biomass, the volume and structure of registered solid biomass production (bio-fuels) should be stressed (Table 3). In 2010, pellets were produced in Croatia in nine plants with a capacity of 205,000 t a year, of which only about a quarter was utilized. Over 95% of the production is sold to foreign markets. Briquette production is estimated at about 60,000 t per year, depending on the raw material available (waste from wood processing). Briquettes are also, in large part, sold to foreign markets. Charcoal is produced industrially only in Belišće (Slavonia), which accounts for more than half of the annual production; the rest is produced by dozens of small and medium producers of charcoal. [4]
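The quoted biomass tariff changes amount to increases of roughly the same relative size for both plant classes:

```python
# The quoted biomass tariff changes expressed as percentage increases
# (kn = Croatian kuna per delivered kWh).
smaller_plants_pct = (1.30 - 1.20) / 1.20 * 100
large_plants_pct = (0.90 - 0.83) / 0.83 * 100
print(round(smaller_plants_pct, 1), round(large_plants_pct, 1))  # 8.3 8.4
```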
Table 3 Solid bio fuel production in Croatia in 2010
No | Type of bio fuel | UM | Amount
1 | Wood pellets | t | 62,372
2 | Wood briquettes | t | 10,227
3 | Charcoal* | t | 4,319
4 | Wood chops | t | 76,410
5 | Firewood | m3 | 1,761,000
Source: [4]
Regional potential of renewable energy sources
In Slavonia and Baranja, 7 plants per renewable energy sources are in function, with a total power of 12.9 MW. The plan is to build 138 projects with a total electrical power of 117.73 MW and an additional 88 MW of thermal power; Tab. 4.

Table 4 Power plants per renewable energy sources in Slavonia and Baranja
No | Plant category | In function (2012): Plants (No) | Electrical capacity [MW] | Planned (2020): Plants (No) | Electrical capacity [MW]
1 | Solar | 2 | 0.02 | 62 | 1.73
2 | Hydro | 0 | 0 | 7 | 0.96
3 | Wind | 0 | 0 | 2 | 104.0
4 | Biomass | 1 | 2.4 | 29 | 86.10
5 | Biogas | 2 | 2.1 | 35 | 50.50
6 | Cogeneration | 2 | 8.4 | 3 | 12.0
7 | Geothermal | 0 | 0 | 0 | 0
8 | Landfill gas | 0 | 0 | 0 | 0
9 | Total | 7 | 12.92 | 138 | 117.73
Source: [22]

The remains of crops and fruit production in Slavonia and Baranja
As can be seen from the previous tables, there is no balance or any concrete data about the energy evaluation of crop and fruit growing residues in Croatia [4] [14]. There is also no such information in the series of studies „The potential of renewable energy sources in all 20 counties in Croatia", which EIHP Zagreb has made [8] [9] [10] [11] [12]. Slavonia and Baranja has an extremely favourable biomass potential in forestry [6] [7] and in this category of renewable energy sources [15] [16] [17].

Figure 9: Solid biomass from crop and fruit growing residues and vineyards cutting in Slavonia and Baranja (10³ toe)
In this research, an evaluation of the energy potential of solid biomass from crop and fruit growing residues in Slavonia and Baranja was made, including the straw of wheat, barley, rye, oats and soybeans, as well as corn cobs, stalks of sunflower, oilseed rape and tobacco, and residues from fruit and vineyard cutting (Tab. 5 and Fig. 9). The lower calorific value of each type of biomass, the area harvested, the yield per individual culture (based on the average of the last five years), and the standard ratio of yield (seed) to crop (farming) residue (stalks and corn cobs) are all taken into account. [1]

Table 5 Energy potential of crop residues and fruit production in Slavonia and Baranja
Culture | Lower cal. val. [MJ/kg] | Area harvested (ha) | Average yield (t/ha) | Mass of seed (t) | Seed/res. | Total mass of biomass (t) | Energy value (GJ) | Eq. heat. oil (t)
Wheat | 14 | 138,962 | 5.1 | 703,678 | 1 | 703,678 | 9,851,498 | 245,123
Barley | 14.2 | 38,022 | 4.2 | 158,804 | 1 | 158,804 | 2,255,019 | 56,109
Rye | 14 | 563 | 2.7 | 1,493 | 1.2 | 1,792 | 25,084 | 624
Oats | 14.5 | 16,383 | 2.5 | 41,688 | 1 | 41,688 | 604,475 | 15,040
Corn | 13.5 | 199,421 | 6.6 | 1,324,604 | 1 | 1,324,604 | 17,882,160 | 444,941
Corn cob | 14.7 | (") | 6.6 | 1,324,604 | 0.2 | 264,921 | 3,894,337 | 96,898
Soybeans | 15.7 | 47,078 | 2.5 | 118,805 | 2 | 237,610 | 3,730,475 | 92,821
Sunflower | 14.5 | 28,482 | 2.8 | 79,690 | 2 | 159,379 | 2,310,998 | 57,502
Sunfl. head | 17.55 | (") | 2.8 | 79,690 | 0.3 | 23,907 | 419,566 | 10,440
Oilseed rape | 17.4 | 16,591 | 2.7 | 44,786 | 2 | 89,572 | 1,558,554 | 38,780
Tobacco | 13.85 | 5,561 | 2.1 | 11,561 | 0.35 | 4,046 | 56,040 | 1,394
Fruit | 14.15 | 19,955 | 3.4 | 67,826 | 0.325 | 67,826 | 959,733 | 23,880
Vineyards | 14 | 9,021 | 3.0 | 27,243 | 0.457 | 27,243 | 381,399 | 9,490
Total | | 520,038 | | | | 3,105,070 | 43,929,340 | 1,093,042
Source: counted from [1] [2] [3]
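The Table 5 arithmetic can be sketched for one culture (wheat), using the table's own figures. Note that the heating-oil equivalence of about 40.19 GJ/t is inferred from the table's last column; the paper does not state it explicitly.

```python
# Sketch of the Table 5 calculation for wheat. With mass in tonnes and a
# lower calorific value in MJ/kg, the energy in GJ is simply their product
# (1 t * 1 MJ/kg = 1 GJ).
mass_seed_t = 703_678      # wheat seed mass (t), Table 5
residue_ratio = 1.0        # residue mass per tonne of seed (wheat straw)
lhv_mj_per_kg = 14.0       # lower calorific value of wheat straw

biomass_t = mass_seed_t * residue_ratio
energy_gj = biomass_t * lhv_mj_per_kg
heating_oil_eq_t = energy_gj / 40.19   # inferred GJ per tonne of heating oil

print(round(energy_gj))          # 9851492 (table: 9,851,498, rounding aside)
print(round(heating_oil_eq_t))   # 245123 (table's "Eq. heat. oil" column)
```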
The energy potential of crop residues and fruit production in Slavonia and Baranja is 1,093,042 toe, which is highly significant for a country that depends on energy imports for about 40% of its final energy consumption. It should be pointed out that around 25% of the cobs are ploughed back into the land and that about 15% of the straw and corn residue is used in livestock raising. If we take into account that a further 10% of the biomass is used for other purposes, about 500,000 toe remains.
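One plausible reading of these deductions is a flat-rate reduction of the gross potential. This is only an assumption: the paper applies the 25% to cobs and the 15% to straw and corn specifically, so the flat-rate version below is a rough approximation of the cited figure, not the paper's exact calculation.

```python
# Rough, assumed flat-rate reading of the deductions in the text
# (25% ploughed in + 15% livestock + 10% other uses).
gross_toe = 1_093_042
remaining_toe = gross_toe * (1 - 0.25 - 0.15 - 0.10)
print(round(remaining_toe))  # 546521, on the order of the ~500,000 toe cited
```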
IV. Conclusion

Energy policy in Croatia
1. Croatia has accepted the goals of the EU, and its energy policy is focused on compliance with EU directives and on the creation of an energy market in Croatia as preparation for integration into the EU energy market. Toward this goal, in recent years Croatia has developed an Energy Development Strategy and numerous energy laws and regulations, adopted two National Energy Efficiency Programs (2009-12 and 2013-16) and established the Energy Efficiency Fund.
2. Since Croatia has a significant energy potential of renewable energy sources (biomass, geothermal, solar and wind resources), there is a series of measures at the national level to strengthen energy efficiency and to use renewable energy sources.
3. Planning and investment in the construction of renewable energy source power plants is left to the market - to private investors (equity) who can use the benefits of the status of privileged electricity producer.

Energy consumption and energy efficiency in the OBC
1. Final energy consumption in the OBC is excessive in relation to the GDP. The structure of energy source consumption is unfavorable; there is a significant dependence on imported fossil fuels and electricity.
2. The greatest opportunities to increase energy efficiency are in the buildings sector, where it is possible to use the local potential of renewable energy sources.

Potential and use of renewable energy sources
1. Croatia has a significant energy potential of renewable energy sources (biomass, geothermal, solar and wind resources), and the Slavonia and Baranja region has a potential in many types of biomass and in geothermal energy.
2. The RES capacities (power plants) constructed so far in Croatia (including the Slavonia and Baranja region) use, in number and electric power, only a small part of the energy potential.
3. Because of the economic crisis in the country and the modification of the tariff system (2012) for privileged renewable energy producers, reduced interest in building the planned power plants can be expected in the next period.
4. A pure market approach to energy development cannot be effective - especially in transition countries such as Croatia.
5. In economic and energy terms it is unreasonable for the country not to use its own renewable energy resources while at the same time importing fossil fuels and electricity for final consumption.
6. The Slavonia and Baranja region has a significant potential of renewable energy sources in biomass and geothermal energy that is not used, and there are no indications of increased use.
7. With this research, the potential of biomass residues from farming and fruit production in Slavonia and Baranja has been determined; on this basis, the region has at least 500,000 toe.
8. In order to start using this energy potential, regional energy policies are required - energy policies that will evaluate the potential of renewable energy sources and bring them into use far more efficiently and faster than the current national model, which is based solely on market laws.
DISTURBANCES EMITTED TO THE ENVIRONMENT BY ELEVATOR DRIVE APPLICATIONS Zoltán Kvasznicza PhD Pollack Mihály Faculty of Engineering and Information Technology, University of Pécs, Pécs, Hungary
email: kvasznicza@pmmik.pte.hu
I. Introduction to the topic
Due to advances in technology, the elevator industry is capable of producing and operating high-speed, high-capacity elevators providing a comfortable ride experience. All this has been made possible by the advent of power electronics, in particular the emergence of technologically advanced semiconductor elements which, as a result of their decreasing costs, have seen widespread use. However, despite all their advantages, these new advances have also been the source of hitherto unconsidered problems. The alternating current choppers and frequency converters utilised in elevator drive systems act as high-power, so-called non-linear loads, causing disturbances in the electric supply system. At the same time, the control systems of modern elevators are typically low-wattage and are quite sensitive to disturbances on the network. Changes in elevator technology are a part of the considerable changes in building technologies. The following areas are affected: energetics, lighting technology, control and measurement systems, safety systems, communication technologies and environment protection. The use of information technology and electronics has become standard in these areas. Intelligent buildings now exist where high levels of energy are required for operation but low energy levels suffice for the control and management of systems. The same is true of modern-day elevator control and monitoring systems. These systems are sensitive to network interference; however, they themselves are quite often the source of such disturbances. The following elevator systems are installed in Hungary:
• Elevators with a dual-speed, asynchronous machine-based drive system
• Elevators with AC chopper supplied, variable-frequency, asynchronous machine-based drive systems
• Frequency converter supplied asynchronous machine-based drive systems
Modern, frequency converter supplied, variable-speed drives have replaced other solutions in almost all areas of industrial application. They are replacing the previously ubiquitous motor-generator-based drives which were operated on the controlled "phase-splitting" principle. Their rise is due to their efficient power transmission properties. The supply of asynchronous motors through frequency converters (inverters) ensures continuous and lossless governance of their revolutions by means of the simultaneous alteration of the supply voltage and frequency [6] [8] [16]. Inverters applied in the elevator industry are composed of IGBTs which are controlled by PWM [11]; their control is field-oriented [10], [12], [14], [16]. However, these favourable drive characteristics achieved through the use of modern technological solutions come at a price. Frequency converter supplied drives pollute the electric network to a greater extent than those operating on the phase-split principle.
II. Objective
The goal of my research is a metrological examination of electrical disturbances caused by elevator systems. There is a relatively new concept, electrosmog, a part of which is the electromagnetic field [7] generated by elevators used in office and residential buildings. The concept and its effect on the human body enjoy widespread interest and are therefore worthy of scientific study. This research aims to cover the gaps outlined above.
The equipment suitable for accurate measurements, such as network analysers, and the elevator systems of various operating principles which formed the technological background for the research were provided by the Institute of Information Technology and Electrical Engineering of the Pollack Mihály Faculty of Engineering and Information Technology at the University of Pécs and by its industrial partners.
III. Research methodology
The nature of this topic called for a deductive research strategy. This served as a framework for the examination of conducted and radiated disturbances in the supply network emitted by AC chopper and frequency converter fed asynchronous motor drives for elevators, which were examined in an unloaded state while travelling upwards (EU, Empty cabin Upwards) and downwards (ED, Empty cabin Downwards) respectively. When selecting the machinery to be involved in the research, in the interest of comparability I took care that the technical parameters (payload, weight, lifting height, speed) of the elevators should not differ significantly. The ESM-100 measuring device manufactured by Maschek was used for the measurement of radiated disturbances. This device utilises a patented method which allows the isotropic measurement of electric and magnetic fields simultaneously. Two consecutive measurement cycles were performed in the case of all tested elevators.
• The first measurement was taken while the empty cabin was moving downwards at reduced inspection speed in order to determine the spatial location of maximum radiated emissions. I recorded and measured the electric field strength E and the magnetic flux density B on an elevation plane 0.5 meters above the drive shaft, with the measurements arranged in a matrix at intervals of 0.25 meters. The instrument determined the electric and magnetic field vector components in all three spatial directions (X, Y, Z), in the 5 Hz to 400 kHz frequency range, and calculated the resulting vector in 3D. The X axis was parallel to the drive shaft while the Y axis was perpendicular to it. Using the Graph ESH-100 application I determined the point of maximum value on the basis of the three-dimensional measurements.
• During the second part of the radiated disturbance measurement process I measured the changes in electric field strength and magnetic flux density over time in the locations described above. The elevator was in the same load state as during testing for conducted disturbances transmitted along electrical lines: that is to say, I examined the full path described by an empty cabin travelling downwards (ED) and upwards (EU) respectively. The curves yielded provide information on the effects exerted by the drive motor during the period of starting, at constant revolutions and when braking.
IV. Measurement results
Recorded data of radiated disturbances by asynchronous motor-based variable speed drives show widespread variation, but it was concluded through the 3D measurements that maximum flux density was invariably located over the axis of each drive. Values decrease with increasing distance, falling to environmental values 50-70 cm from the point of maximum value. The three-dimensional representation of the data set shows components such as the brake magnet or the transformer for the controls as having high radiation. It was not considered practical to compare magnetic flux density values measured in the various elevators, since these machines were manufactured by different vendors (LOHER, ASTOR, LANCOR), and even when the manufacturer was the same, as is the case with the LOHER elevators, they were of different models with different dimensions and structural design.
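The evaluation step described above, locating the point of maximum resultant flux density on the measurement grid from the three recorded components, can be sketched as follows (a minimal illustration under our own assumptions; the function and array names are ours, not part of the measurement software):

```python
import numpy as np

def find_max_resultant(bx, by, bz, spacing=0.25):
    """Find the grid point with the maximum resultant flux density.

    bx, by, bz : 2D arrays of the X, Y, Z field components measured on
                 the matrix of points (0.25 m grid spacing in the text).
    Returns the (x, y) position in metres and the maximum magnitude.
    """
    # Resultant 3D vector magnitude at every grid point.
    b = np.sqrt(bx**2 + by**2 + bz**2)
    # Index of the largest value, converted back to 2D grid coordinates.
    i, j = np.unravel_index(np.argmax(b), b.shape)
    return (i * spacing, j * spacing), b[i, j]
```

The same magnitude array can also be plotted directly to reproduce a three-dimensional layout like the one shown in Figure 1.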
Figure 1. Magnetic flux density values in a three-dimensional magnetic layout measured on a plane 0.5 m above the motor drive
In practice, field strength depends on the surface area, the distance from the surface and the voltage level relative to ground potential. The test results supported these conclusions: although I recorded different values, the measured values were constant in time irrespective of the load. The measured values varied considerably; however, they conformed to standards. A magnetic field component is generated in the environment of the equipment under power if current flows through it. This is reflected in the measured and recorded values of magnetic flux density as a function of time.
Figure 2. Changes in the flux density of the magnetic field over time in the case of the frequency converter-fed elevator drive travelling upwards (EU) and downwards (ED) with an empty cabin
Based on the evaluation of the diagram, the following conclusions can be reached:
• During the period of the motor drive starting, the flux density increases over the environmental value, with the degree of the increase depending on the type of drive motor used,
• Values obtained with the various elevators show a large variation (2.85 to 25.5 mT) depending on the type and design of the motor drive; however, these values all comply with the relevant standards,
• Of the two load states, downward motion of an empty cab usually produces greater maxima, which can be explained by the higher currents associated with this operating state,
• The peak values of the curves signal the starting and stopping of the motor, associated with a start-up surge and with another current surge resulting from braking (brake magnet operation).
Measured maximum values do not exceed the health limit values for electric field strength and magnetic flux density stipulated by prescriptions of various sources, such as those of the European Union, Germany, Sweden, Russia and Hungary. Values of the field components decrease with increasing distance from the elevator machine; therefore it can be stated that radiation harmful to health is present neither in the machine room nor in the cabin used by the passengers.
V. Summary of results
I have proved through the spatial mapping of radiated disturbance emitted by asynchronous motor elevator drives that the spatial distribution of magnetic flux density and the maximum flux density values are not affected by the working principles of elevator drives. These parameters are jointly determined by the type and power of the elevator motor, by the structure of the stator and rotor windings and insulation, by the type of the brake magnet and by the load condition. I have shown that, of the spatial components of the disturbance radiated by asynchronous motor drives, both the electrical component, which is constant in time and independent of the load condition, and the magnetic component, which depends on the parameters and load condition of the drive motor, meet the prescriptions. State-of-the-art frequency converter fed and alternating current chopper fed elevators emit no disturbances harmful to health.
References
[1.] Apatini K., Bérces G., Horváth I., Makovsky G., Némethy Z., Tarnik I., Operation and repair of elevators and escalators (in Hungarian), Műszaki Könyvkiadó, Budapest, 1983. [2.] Barabás M., Electric machines I (in Hungarian), Műszaki Könyvkiadó, Budapest, 1981. [3.] Barney G. C., Elevator Electric Drives, Ellis Horwood, 1990. [4.] Bimal K. Bose, Power Electronics and Variable Frequency Drives, IEEE Press, New York, 1996. [5.] Dán A., Tersztyánszky T., Varjú Gy., Electric energy quality (in Hungarian), Invest-Marketing Bt., Budapest, 2006. [6.] Fischer R., Elektrische Maschinen, Hanser Verlag, Leipzig, 1999. [7.] Fodor Gy., Electromagnetic fields (in Hungarian), Műegyetemi Kiadó, Budapest, 1998. [8.] Gemeter J., Farkas A., Nagy L., Electric machines (in German), BMF KVK 2043, Budapest, 2007. [9.] Halász S., Automated electric drives I (in Hungarian), Tankönyvkiadó, Budapest, 1989. [10.] Halász S., Hunyár M., Schmidt I., Automated electric drives II (in Hungarian), Műegyetemi Kiadó, Budapest, 1998. [11.] Hunyár M., Kovács K., Németh K., Schmidt I., Veszprémi K., Energy saving and network friendly electric drives (in Hungarian), Műegyetemi Kiadó, Budapest, 1997. [12.] Kazmierkowski M. P., Tunia H., Automatic Control of Converter-fed Drives, Elsevier Science Publishers B. V., Warszawa, 1994. [13.] Pálfi Z., Electric drives (in Hungarian), Műszaki Könyvkiadó, Budapest, 1979. [14.] Schröder D., Elektrische Antriebe 2., Regelung von Antrieben, Springer, Berlin, 1995. [15.] Tóth F., Danfoss Drives: VLT Low Harmonic Drive, Magyar Elektronika, 2009/7-8, p. 31. [16.] Vogel J., Elektrische Antriebstechnik, Hüthig GmbH, Heidelberg, 1998.
PLANNING THE RECONSTRUCTION OF A DAMAGED ORBIT Markella Zsolt1, Vizkelety Tamás2 1. Institute of Instrumentation and Automation, Óbuda University, Kandó Kálmán Faculty of Electrical Engineering,
Budapest, Hungary
email: markella.zsolt@kvk.uni-obuda.hu
2. Department of Oral and Maxillofacial Surgery and Dentistry, Semmelweis University, Budapest, Hungary
Abstract - Introduction: Three-dimensional imaging brought a breakthrough in diagnosing facial deformities caused by developmental disorders or accidents. Previously, the measurement points were identified on X-ray images taken either from the front or from the side. On an X-ray image the points are projected onto one another, so on a lateral image it cannot be decided which point is on the right and which on the left side of the head. Among the Cone-Beam CT devices developed for dental purposes there is equipment that, with low radiation exposure, can image a volume large enough to capture nearly the whole head at a resolution of 0.3 mm. With the computer program we have developed, we can diagnose the deformity of the head and plan the reconstructive surgery. Objective: To produce a computer-made template for the reconstructive surgery of an orbit damaged in an accident, so that the surgeon does not have to cut to size and bend the titanium mesh used for bone replacement in the operating room. Material and method: The orbit is not a closed shape; there are several anatomical openings on its bony boundary. We developed a measurement method that requires an acceptable amount of manual intervention. The essence of the procedure is that we examine slices at 4.8 mm intervals, with a starting slice in a standard position. Based on the wireframe model created this way, we print the bone surface forming the damaged orbit. The model of the intact orbit is also printed after mirroring it onto the median plane of the face. The titanium mesh used to replace the missing bone can be bent to the shape of the mirrored orbit and then, placed on the model of the bone surface forming the damaged orbit, cut to size. Results: We are past the first successful operation. With the help of the 3D print, the titanium mesh cut to size and bent before the operation could be used in the operating room without problems. Conclusions: It is recommended to halve the 4.8 mm slice interval. The amount of data to be processed will still be manageable, but a more accurate model is obtained.
Keywords: CBCT, orbit
I. Introduction
Three-dimensional imaging brought the breakthrough in diagnosing facial deformities caused by developmental disorders or accidents. Previously, the measurement points were identified on X-ray images taken either from the front or from the side. On an X-ray image the points are projected onto one another, so on a lateral image it cannot be decided which point is on the right and which on the left side of the head. The same problem also arises on frontal X-ray images, only the direction differs: here the question is whether the given point lies further forward or further back. Among the Cone-Beam CTs developed for dental purposes there is a device that, with low radiation exposure, can image a volume large enough to capture nearly the whole head at a resolution of 0.3 mm. With the computer program we have developed, we can diagnose the deformity of the skull and plan the reconstructive surgery.
II. Objective
On the CBCT scan of the patient's skull, it should be possible to delineate the orbit in a standardized way. In the case of a one-sided deformity, it should be possible to mirror the intact orbit onto the median plane of the face and fit it onto the damaged one. It is a requirement that the bone surface forming the damaged orbit can be printed, as well as the mirror image, across the median plane of the face, of the bone surface bounding the intact orbit. So that the printing can be done on any 3D printer, the shapes should be described in the STL format widely used in industrial practice.
III. Material and method
The orbit is not a closed "shape"; it contains several anatomical openings, at locations that differ from person to person.
Figure 1. The orbit
The thickness of the bounding bone is likewise variable, and consequently its radiation-absorbing capacity is not uniform either. For these reasons, marking out the bone surface cannot be solved simply by identifying the bone surface; the operator's intervention is needed in any case to handle the discontinuities of the bone. For measuring the volume of the orbit there exists a procedure in which, on each CT slice, the operator connects the two edges of the anatomical holes with a straight line, and does the same with the anterior surface of the orbit. The areas of the now closed shapes obtained this way are measured slice by slice, and by multiplying these values by the slice thickness the volume of the orbit is obtained. The biggest problem with this method is that it forms a subjective boundary at the anterior rim of the orbit and, moreover, precisely the bone surfaces important to us drop out of the examination. We worked out the following measurement method, which requires an acceptable amount of manual intervention. For the CBCT scans we assumed, and also verified during the examination, that on the scan of the patient's skull the median plane of the face falls, to a good approximation, into the sagittal plane. The examiner has to mark the most dorsal point of the lateral edge of both orbits. Of the two spatial points we take the more dorsal one, and we examine the cross-sections of the orbit in the coronal plane defined by this point, as well as in planes parallel to it, spaced 4.8 mm apart in the dorsal direction.
TABLE 1. Explanation of medical directions
coronal plane: a plane parallel to the forehead
sagittal plane: defined by the top-to-bottom and front-to-back directions
lateral: horizontal (right to left), farther from the midline
ventral: sagittal (front to back), towards the belly
dorsal: sagittal (front to back), towards the back
medial: horizontal (right to left), closer to the midline
Figure 2. Examination planes
So the essence of the method is that we examine slices at 4.8 mm intervals. The CBCT scans are available to us with voxel sizes of 0.3 and 0.4 mm; this is how the 4.8 mm spacing arose. Although 1.2 mm would be the least common multiple, in that case more than 50 images would have to be analysed per orbit, so, because operator intervention is indispensable, we reduced the number of examined slices. When processing a large number of slices, the examiner has more opportunities to make mistakes, and fatigue and boredom are not negligible risks either. This is how the examination of slices at 4.8 mm intervals came about. The examined slices are taken with a starting slice in a standard position. The starting point (the most dorsal point of the lateral edge of the orbit) was determined as follows: since the orbit is completely open from the ventral direction and the closure of the cavity is very haphazard, we take the first coronal slice in which the orbit is already bounded by bone all the way around. This is why the more dorsal of the two reference points has to be taken as the basis: in that plane the bone contour of the orbital entrance is already closed on both sides. We strove to use a simple method requiring relatively little user intervention, minimizing the effect of the user's subjectivity. The examiner has to mark the centre of the orbital cross-section on the given slice, since, moving dorsally, the axis of the orbit runs in the medial direction. The density of the material filling the orbit is not homogeneous and, owing to the way CBCT scans are made, may even vary from scan to scan, so the average intensity of the pixels in an 11×11-pixel square around the marked centre is taken as the intensity of the material filling the orbit. The algorithm determining the edge of the orbit works by starting out from the centre and finding the intensity at which a given point deviates from the intensity of the material filling the orbit, in either the positive or the negative direction, by more than the maximum permitted deviation that can be set in the program. The default maximum permitted deviation is 25%, but its value is adjustable. Deviations in the positive direction are caused by bone, those in the negative direction by softer connective tissues. Unfortunately, the fill produced with this procedure "flows out" of the orbit at the anatomical holes. To prevent the "outflow", the examiner has the option of drawing around the orbit. However, it is not this outline that becomes the boundary of the orbit, and the outline does not even have to be complete: it is enough for it to fill in the discontinuities.
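The edge-finding step described above can be sketched as a simple flood fill (an illustrative reconstruction under our own assumptions; the function and parameter names are ours): starting from the examiner-marked centre, pixels are accepted while their intensity stays within the permitted deviation from the 11×11 reference average, and an optional operator-drawn barrier mask plugs the anatomical holes.

```python
import numpy as np
from collections import deque

def segment_orbit(slice_img, seed, barrier=None, tol=0.25):
    """Flood-fill one CT slice as described in the text.

    slice_img : 2D array of pixel intensities
    seed      : (row, col) centre of the orbital cross-section, marked
                by the examiner
    barrier   : optional boolean mask of operator-drawn strokes that
                plug the anatomical holes (True = impassable)
    tol       : maximum permitted relative deviation (default 25%)
    """
    r0, c0 = seed
    # Reference intensity: mean of the 11x11 square around the seed.
    ref = slice_img[max(r0 - 5, 0):r0 + 6, max(c0 - 5, 0):c0 + 6].mean()
    h, w = slice_img.shape
    filled = np.zeros((h, w), dtype=bool)
    filled[r0, c0] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not filled[nr, nc]:
                if barrier is not None and barrier[nr, nc]:
                    continue  # operator-drawn stroke blocks the "outflow"
                # Stop where the intensity deviates from the reference by
                # more than tol in either direction (bone: +, tissue: -).
                if abs(slice_img[nr, nc] - ref) <= tol * abs(ref):
                    filled[nr, nc] = True
                    q.append((nr, nc))
    return filled
```

A barrier mask painted by the examiner simply becomes a set of pixels the fill may never enter, matching the "incomplete outline" behaviour described above.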
Placing the slices next to one another, we obtain a truncated-cone-like shape; this amorphous body consists of slices 4.8 mm apart but with jagged edges at the 0.3 or 0.4 mm resolution. We reduce the fineness of the slices' boundary lines to the order of magnitude of the inter-slice distance with the following method: the perimeter of each slice is divided into 32 parts, starting from 12 o'clock and proceeding clockwise, performing the division at uniform angular spacing. Using the measurement points of the individual orbital slices obtained this way, we create a wireframe model of the orbit.
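The resampling described above, 32 boundary points per slice at equal angular steps starting from 12 o'clock and proceeding clockwise, might look like this (a sketch under our own assumptions, not the authors' code; it marches outwards from the slice centre until it leaves the filled cross-section):

```python
import numpy as np

def contour_points(mask, centre, n=32):
    """Sample the boundary of a filled slice at n equal angular steps,
    starting straight up (12 o'clock) and going clockwise."""
    cy, cx = centre
    pts = []
    for k in range(n):
        ang = k * 2 * np.pi / n
        # Image rows grow downwards, so "up" is -cos and clockwise is +sin.
        dy, dx = -np.cos(ang), np.sin(ang)
        r = 0.0
        # March outwards until the ray leaves the filled region.
        while True:
            y = int(round(cy + r * dy))
            x = int(round(cx + r * dx))
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]) or not mask[y, x]:
                break
            r += 0.5
        pts.append((cy + r * dy, cx + r * dx))
    return pts
```

Stacking the 32-point rings of consecutive slices gives exactly the wireframe mentioned in the text.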
Figure 3. Wireframe model of the orbit
The bone surface forming the damaged orbit can be produced by removing our orbit shape from a cube and printing the remainder. The model of the intact orbit is likewise printed after mirroring it onto the median plane of the face. For printing, the STL file format requires outward-facing triangles forming a closed shape. Producing the triangles from the measurement points is not overly complicated, but the task has numerous pitfalls. For example, when the shape is mirrored the surfaces turn inside out. When forming the bone surface of the orbit, that is, when subtracting the orbit from the cube, the corners of the cube and auxiliary points on the cube's edges have to be determined so that both the concave and the convex surface parts can be described with triangles.
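The surface-flipping pitfall mentioned above can be illustrated in a few lines (an illustrative sketch, not the authors' actual code; the mirror plane x = x0 stands in for the median plane of the face): mirroring reverses the winding order of each STL triangle, so its facet normal flips unless two vertices are swapped back.

```python
import numpy as np

def normal(tri):
    """Right-hand-rule facet normal of a triangle (CCW = outward)."""
    v0, v1, v2 = tri
    return np.cross(v1 - v0, v2 - v0)

def mirror_triangles(tris, x0=0.0):
    """Mirror STL triangles across the plane x = x0.

    A mirror flips the winding order, turning the surfaces "inside out"
    as the text notes, so two vertices of every triangle are also
    swapped to keep the facet normals pointing outwards.
    """
    out = []
    for v0, v1, v2 in tris:
        m = [np.array([2 * x0 - v[0], v[1], v[2]]) for v in (v0, v1, v2)]
        out.append((m[0], m[2], m[1]))  # swap to restore CCW winding
    return out
```

Without the swap, a mirrored model would be rejected or misprinted by slicers that rely on consistent outward orientation.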
Figure 4. Bone surface forming the orbit
The titanium mesh serving to replace the missing bone can be bent to the shape of the mirrored orbit and then, placed on the model of the bone surface forming the damaged orbit, cut to size.
IV. Results
We are past the first successful operation. After studying the CBCT scan and the printed model, the steps to be followed in the operating room could be determined. The operation consisted of two basic steps. First the zygomatic bone had to be freed, since it had previously been in an incorrect position. Then the pre-bent titanium mesh was fitted into place, and both the mesh and the zygomatic bone were fixed with screws. With the help of the finished 3D print, the titanium mesh cut to size and bent before the operation could be used in the operating room without any problems.
V. Conclusions
Where there are large changes, our printed models have rather rough surfaces, so it is advisable to halve the 4.8 mm slice spacing. The amount of data to be processed will still be manageable, but a more accurate model is obtained. At a resolution higher than that, precisely the simplicity of the method is lost and the processing time grows unjustifiably.
FIELD DATA LOGGERS FOR HYDROLOGICAL AND GEOLOGICAL MEASUREMENTS Molnár Zsolt Óbuda University, Kandó Kálmán Faculty of Electrical Engineering, MAI, Budapest, Hungary email: molnar.zsolt@kvk.uni-obuda.hu
Abstract: This article presents two data-logging instruments. The first is a measuring and recording device that supplies data for examining the hydrological properties of natural catchments. After two years of development and testing, the instruments installed at 18 measurement points in the Eastern Carpathians in the summer of 2009 successfully recorded more than three months of events without failure. The second logger was built on the experience gained while developing and operating this device, and supplies data usable for geological research. This device worked properly in several experimental measurements between 2010 and 2012. In this article I describe the design and development process of these two instruments, the problems that arose and their solutions, and the experience gained.
Keywords: liquid level measurement, water level measurement, precipitation measurement, field data logging, hydrological measurement, geoecological measurements
I. Introduction
In 2007 a PhD student of the Department of Physical Geography of the ELTE Faculty of Science approached me with the problem that he needed recording instruments for examining the hydrological properties of catchments. The task of the devices was to record, at several points and over several months, the water-level changes of a river, its tributaries and the smaller watercourses flowing into them. In addition, the amount and intensity of precipitation over the catchment area also had to be measured. The task proved interesting and highly challenging, since, and not as a secondary consideration, low cost also had to be set as a goal. In the following I present the development, from the examination of the possibilities through intermediate solutions to the final device, which led to it producing adequate measurement results in the summer of 2009. The measurement results and the conclusions drawn from them earned great recognition in professional circles.
II. Measurement principles
Measuring the level of a liquid is one of the most common instrumentation and measurement tasks. There are numerous solutions long used in industry, which allow level determination with various accuracies and resolutions. Most of these, however, are expensive, while others cannot be used for low-consumption, unattended field measurements. In the following, after an examination of the liquid-level measurement methods most often used in industry (capacitive, hydrostatic, ultrasonic, electromechanical, radar, magnetostrictive) [1], I briefly present the two applied methods that remained after the screening and briefly analyse their suitability for the given task.
Capacitive measurement
Capacitive measurement exploits the fact that a capacitance can be measured between two metal objects. The two metal objects are the two plates of a capacitor, and the insulating medium between them is the dielectric. If the two metal objects are two flat, parallel metal plates of equal area (a parallel-plate capacitor), the capacitance between them is
C = ε0 · εr · A / d    (1)
where ε0 is the permittivity of vacuum, εr is the relative permittivity of the insulating material between the plates, A is the area of the plates, and d is the distance between them. If the geometry of the two plates and their position relative to each other are constant, the measurable capacitance depends only on εr; this is what capacitive level sensing exploits. In the sensor arrangement, the metal wall of the tank is one plate and a metal rod immersed in the tank is the other. If the liquid to be measured is conductive, the inner surface of the tank and the immersed metal rod must be coated with an insulator. A change in the liquid level causes the capacitance to change; its value increases as the level rises. Advantages of capacitive level measurement [2]:
• it allows high-accuracy measurement (even better than ±0.5%)
• it can be used for electrically insulating as well as conductive materials
• it can also be used to measure liquids, pastes, and powdery or lumpy materials
• it is cheap
• it can be used at high pressure and at high temperature
Limitations and problems of capacitive level measurement:
• it can be used when εr > 1.5
• in the case of a tank made of insulating material, an auxiliary probe is needed
• a change in the composition of the measured medium affects the measurement accuracy
• contamination depositing on the plates degrades the accuracy.
Its low cost and simple implementation make the capacitive measurement principle suitable for precipitation and water-level measurement. The theoretically high measurement accuracy may be degraded by changes in the composition of the examined medium (river/stream water: dissolved matter, suspended solid contamination). The change in capacitance can be measured in several ways, for example:
• as a change in the discharge time constant of an RC network charged to a constant voltage,
• as a change of an oscillator frequency (if C is a frequency-determining element),
• as a change in the impedance of the capacitance.
Electromechanical level measurement
In electromechanical level measurement, the change of the liquid level is generally converted into a displacement, and the displacement into an electrical signal. Electromechanical level measurement can be implemented in numerous variants. One possible solution is when a float fitted with a magnet switches reed relays, and these contacts are sensed. If the relays are placed with sufficient density, the change of the liquid level can be followed with a small error. This solution is rarely used because of its fragility and sensitivity to mechanical effects (impact, vibration). The diameter of the relays determines the theoretically possible maximum resolution. No additional errors need be expected during the measurement, only the uncertainty arising from the limited resolution. There is also an electromechanical level-measurement method (though used ever more rarely today) in which the level change alters the resistance of a potentiometer; in this case the level change is converted into a linear displacement or an angular rotation. Another type of level sensor with a resistance output combines reed relays and resistors: depending on which reed relay the magnetic float short-circuits, a different resistance is measurable at the output. A special solution is the tilting-lamella level indicator, in which a magnetic float following the level is placed in a level tube and tilts over lamellae fitted with small magnets; the tilting-lamella indicator can also be combined with a level switch. Electromechanical level measurement also includes rotating-paddle and vibrating-fork measurement, but these can only be used for level switching: by sensing that a mechanical motion changes or ceases when a given level is reached, it can be told whether the level of the examined medium has reached that level or not.
Advantages of electromechanical measurement:
• inexpensive
• long-established, proven, easily implemented solutions
• some solutions are, by construction, highly reliable (reed-relay based)
The limitations and problems of electromechanical measurement:
• in some solutions the resolution is limited
• in some solutions mechanical wear occurs
• continuous-principle electromechanical converters (e.g. potentiometric) may be subject to ageing

The low cost, small space requirement and many variants of electromechanical level measurement all argue for trying out a few of the numerous possible solutions. For field use (moisture, temperature effects), and for reliability reasons, mainly the reed-relay solutions come into consideration. This solution can give unambiguous, definite information about the liquid level, which is easy to process in a digital system. The limited resolution makes the application of the reed-relay measuring principle questionable, but as a compromise it is certainly worth considering.

III. Requirements and specification for the instrument

The initial specification was drawn up by the customer. The water-level meter had to be designed for unattended, long-term field measurement. Reading out and saving the data, as well as testing, checking and preparing the instruments for measurement, take place under laboratory conditions. The initial requirements placed on the device (which became final in its third variant, after two intermediate versions) were:
• Data to be recorded: record timestamp, liquid level, temperature
• Liquid-level measurement:
  o Resolution: better than 5 mm
  o Measuring range: 600 mm (500 mm for the precipitation-level meter)
• Temperature measurement:
  o Uncertainty: at most ±0.5 °C
• Sampling interval: at most 5 minutes
• Continuous operating time: at least 3 months
• Operating temperature range: 0…50 °C
• PC connection: RS232 port

After analysing the problem, mapping the possibilities and sketching a preliminary design, the specification changed as follows, mostly becoming stricter. (After consultation with the customer, every change was accepted.)
• Liquid-level measurement:
  o Resolution: 3.3 mm
  o Measuring range: 550 mm (500 mm for the precipitation-level meter)
• Temperature measurement:
  o Uncertainty: typically ±0.5 °C (at most ±2 °C)
• Sampling interval: 3 minutes
• Power supply: 2 AA-size alkaline cells with 1.5 V terminal voltage

Explanation of the deviations from the initial specification:
• Resolution of the liquid-level measurement: the diameter of the reed relay used is about 3.1 mm. A minimal gap must be left between the relays because of the manufacturing tolerance (±0.1 mm) and for ease of assembly; this yielded the 3.3 mm placement pitch, which also determines the resolution.
• Measuring range (liquid-level measurement): to optimise costs, the technological limits of the available printed-circuit-board manufacturers had to be taken into account. They can generally produce boards with a maximum length or width of 500-550 mm, but we found a company that undertook to manufacture a 600 mm long board without surcharge. Of the 600 mm total length, the reed relays occupy 550 mm; the remaining 50 mm is needed for the other electronic components. The resulting measuring range falls short of the desired value, but it is cost-effective, and practical experience has shown that, exceptional flood waves aside, it is usually sufficient.
• Uncertainty of the temperature measurement: most cheap, miniature semiconductor temperature sensors on the market are not made for precision measurements. A few types can be found, however, that can meet the stated requirements in the 0…50 °C range statistically (considering the average result of many samples). According to the manufacturer's specification of the chosen sensor, the maximum error over the full range is ±2 °C; this causes no problem, because the primary purpose of the temperature measurement is to detect possible danger of frost.
• There was no requirement for the power supply. With simplicity and low cost in mind, two AA-size 1.5 V cells can provide the supply voltage. Taking the capacity of common alkaline cells into account, preliminary calculations show that the circuit uses 30% of the total available capacity over 3.5 months of operation. This provides sufficient reserve even under extreme conditions (temperature, humidity, possibly preliminary trials, etc.).

IV. The final instrument and the problems of the earlier variants
As already mentioned, the device became usable in its final form as the third variant. In hardware terms, the first variant was designed around a capacitive sensor, while the second and third variants operated, and the third still operates, on the reed-relay principle.

The first variant: with a capacitive sensor

In the first variant, the capacitor formed by two metal cylinders, insulated from each other and from the liquid, was used as the frequency-setting element of an oscillator. When the water level rose, the oscillator frequency fell. I measured the oscillator frequency with one of the microcontroller's hardware modules suited to this purpose (the CCP module). Temperature was measured by digitising the change in the forward voltage of a transistor's B-E junction (A/D module). The measured values were stored in a FLASH memory connected to the microcontroller over the I2C bus. The microcontroller switched off the supply of the analogue units in sleep mode, so the current consumption was then minimal. The time base was provided by one of the microcontroller's counters that keeps running in sleep mode. The electronics of the first variant and the assembled device are shown in fig. 1.
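The B-E junction read-out above relies on the roughly linear temperature dependence of a silicon junction's forward voltage. A minimal sketch of the conversion follows; the -2 mV/°C slope and the 25 °C reference voltage are typical textbook values assumed for illustration, not the instrument's actual calibration:

```python
# Convert a measured base-emitter forward voltage to temperature.
# Assumed calibration: Vbe = 0.650 V at 25 degC, slope -2.0 mV/degC
# (typical silicon-junction values, not the instrument's actual ones).
VBE_REF = 0.650   # V at 25 degC
SLOPE = -0.0020   # V per degC

def vbe_to_celsius(vbe: float) -> float:
    """Linear estimate of junction temperature from forward voltage."""
    return 25.0 + (vbe - VBE_REF) / SLOPE

print(vbe_to_celsius(0.650))           # 25.0
print(round(vbe_to_celsius(0.630), 1)) # 35.0 (lower Vbe -> warmer junction)
```

In practice the two constants would come from a two-point calibration of each device rather than from data-sheet values.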
Fig. 1 The electronics of the first variant and the assembled device
During deployment (in the summer of 2007), for various reasons only a few precipitation recorders were placed in the field. Experience showed that the capacitance represented by the probe, and thus the oscillator frequency, were strongly influenced by environmental effects, mainly temperature. Although excellent results were obtained under laboratory conditions (fig. 2), field measurement revealed that the instrument was essentially unusable. The less important quantity, temperature, was measured properly, but the liquid level was not. The recorded and analysed curve clearly showed the temperature dependence: with daily regularity, the registered precipitation-level value rose from dawn until early afternoon and fell from the afternoon until the next dawn. In addition, sudden changes occasionally occurred whose cause is still unknown. The mechanical design of the instruments was not perfect either: water got into some of the electronics, and some probe components corroded. In view of these failures we abandoned the capacitive measuring principle in agreement with the customer, although it is conceivable that with more careful design and experimentation the results could have been improved considerably even under "small-workshop" conditions.
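The read-out chain of this first variant can be sketched as follows. The oscillator relation f = 1/(1.4·R·C) (a 555-style astable formula) and all component values are illustrative assumptions, not the instrument's actual design; the sketch only captures the stated behaviour that a rising level (larger C) lowers the frequency:

```python
# Sketch of a capacitive level read-out: the probe capacitance sets the
# frequency of an oscillator, so a falling frequency means a rising level.
# Assumed relation f = 1 / (1.4 * R * C), as in a 555-style astable stage;
# R, the dry capacitance C0 and the sensitivity are illustrative values.
R = 100e3          # ohms, assumed timing resistor
C0 = 100e-12       # farads, probe capacitance with no water
DC_PER_MM = 1e-12  # farads of extra capacitance per mm of water (assumed)

def frequency_to_level_mm(f: float) -> float:
    """Estimate water level from the measured oscillator frequency."""
    c = 1.0 / (1.4 * R * f)        # invert f = 1/(1.4*R*C)
    return (c - C0) / DC_PER_MM    # capacitance above the 'dry' baseline

f_dry = 1.0 / (1.4 * R * C0)
print(round(frequency_to_level_mm(f_dry)))  # 0 (no water at the dry frequency)
```

The model also makes the failure mode visible: any temperature drift of R or of the stray capacitance shifts the computed level directly, which is what the field measurements showed.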
Fig. 2 Characteristic curve of the capacitive sensor measured in the laboratory
The second variant: with reed relays and a magnetic float

For the new instrument, the sensing had to be put on an entirely new footing, and the reed-relay solution came into view. Although the literature survey showed that the reed-relay / magnetic-float principle is not used in industry for high-resolution measurements, it still looked promising. In the applications found it generally solves level-switching tasks at one or a few points, but the principle seemed applicable in our case as well. Using a large number of relays does raise problems (e.g. many components and solder joints during assembly, and the relays can be damaged by mechanical stresses and loads). At the same time it also brings advantages: a relay is either on or off, its operation is two-state and introduces no uncertainty into the system, and with a suitable arrangement it can be operated with high reliability, independently of environmental effects. In the instrument, the on/off state of the large number of relays (160-185, depending on the application) is shifted into the microcontroller through shift registers, using a total of 3 pins. Reading in takes a few tens of microseconds; the shift registers and the analogue units of the electronics are powered only for this time and are otherwise unpowered to save energy. Temperature sensing was implemented with a semiconductor temperature sensor whose output signal is digitised by the microcontroller's built-in 10-bit A/D module. To minimise supply-voltage and temperature dependence, a precision semiconductor voltage reference was built in to provide the reference voltage for the A/D module. Since the microcontroller's internal RC oscillator was used to generate its clock, the external crystal could be omitted. The time base (which must be very accurate so that the many instruments can operate in synchrony), however, could no longer be derived from the clock. To replace it, I chose to count the once-a-minute interrupts of a real-time clock (RTC) on the I2C bus; this made it possible to synchronise the clocks of the individual devices and, should it become necessary later, to store the absolute time of sampling (date, hour, minute, second) as well. The memory size had to be increased: two 512 kbit FLASH chips were placed on the board (for this variant the requirement was a 2-month recording period with a 5-minute measuring interval). The area selected for the instrumented measurements is one of the most precipitation-rich regions of the Tisza catchment (the source region of the Nagy-Szamos, Radnai-havasok), which, owing to its representative character, is excellently suited for studying the hydrological processes of mountain catchments with similar characteristics. The customer had already called attention to the importance of these instrumented measurements in an earlier study [3]. A few problems arose again during installation (time limits, technical obstacles, weather), so not all of the instruments were deployed, but both precipitation recorders and water-level recorders were among those installed. The measurements, which that year served to test the instruments rather than to produce results that could be fully evaluated hydrologically, revealed some smaller and larger faults, partly specification problems and partly programming errors. At the same time the experience was very favourable: the measuring principle and the instruments proved themselves completely. None of the deployed instruments had either an electronic or a mechanical fault, so it was already clear that the direction was right. The customer published the test results [4].
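The shift-register read-out described above can be sketched as follows: the relay chain arrives as a sequence of bits, and the level is taken from the highest closed relay. The 3.3 mm pitch is from the specification; the relay count, bit order and the highest-closed-relay rule are assumptions for illustration:

```python
# Decode the reed-relay chain shifted in from the probe.
# bits[i] is True if relay i (counted from the bottom) is closed by the
# magnetic float. Pitch of 3.3 mm per relay as in the specification;
# taking the highest closed relay as the float position is an assumption.
PITCH_MM = 3.3

def level_from_relays(bits: list) -> float:
    """Return the liquid level in mm, 0.0 if no relay is closed."""
    closed = [i for i, b in enumerate(bits) if b]
    if not closed:
        return 0.0
    return (max(closed) + 1) * PITCH_MM

# A float closing relays 9 and 10 of an assumed 167-relay chain:
chain = [False] * 167
chain[9] = chain[10] = True
print(round(level_from_relays(chain), 1))  # 36.3
```

Because each relay is strictly two-state, the only uncertainty in this decoding is the ±one-pitch quantisation noted earlier.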
In it he describes that the measurement results were qualitatively correct and even agreed with the data recorded by nearby meteorological stations (timing of rainfall, amount of precipitation, trend of air temperature), and that the flood waves follow the rainfall at the proper times and recede in a realistic manner (fig. 3 [4]).
Fig. 5 An assembled precipitation recorder, and one of the installed water-level recorders before commissioning
V. Design of the instrument for geological measurements
Fig. 3 Water-level time series of the water-level recorders deployed along the streams
The third, final variant

For the third variant no fundamentally new instrument had to be designed; the faults and shortcomings that had emerged had to be remedied at minimal cost. Because of the rapid passage of the flood waves, the sampling interval had to be reduced to 3 minutes, which required enlarging the memory bank; at the same time this somewhat increased the recordable period. The single bug in the program was fixed: owing to a wrongly chosen variable type, the counter addressing the memory overflowed, so in the previous variant newer data overwrote the time series already recorded. Minor modifications were also made to the mechanical design, e.g. an LCD display was added for checking the sample count in the field. The simplified block diagram of the final instrument's electronics is shown in fig. 4.
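The addressing bug just described can be reproduced in a few lines. The 16-bit counter width and the record size below are assumptions for illustration; the two 512-kbit FLASH chips are from the text:

```python
# Reproduce the memory-addressing bug of the second variant: an address
# counter held in too narrow a type wraps around, so new records overwrite
# the oldest data. Counter width and record size are illustrative.
RECORD_SIZE = 8                  # bytes per record (assumed)
MEMORY_BYTES = 2 * 64 * 1024     # two 512-kbit FLASH chips = 128 KiB

def record_address(n: int, bits: int = 16) -> int:
    """Byte address of record n when the counter is truncated to 'bits'."""
    return (n * RECORD_SIZE) & ((1 << bits) - 1)

# Record 8192 should live at byte 65536, but a 16-bit counter wraps to 0,
# colliding with record 0:
print(record_address(8192))            # 0      -> overwrites record 0
print(record_address(8192, bits=32))   # 65536  -> the intended address
```

Widening the counter type (the fix applied in the third variant) makes the whole 128 KiB address space reachable.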
Fig. 4 Simplified block diagram of the final variant of the reed-relay instrument

In the summer of 2009 (16 June - 19 September) the final-variant instruments could at last be deployed (fig. 5). All instruments operated flawlessly for the entire duration of the measurement, and the results exceeded expectations. Precipitation and flood waves were recorded for 3 months at a total of 18 measuring sites.

The successful operation of the hydrological recorder provided the basis for fulfilling the next request, which also came from the Department of Physical Geography of the ELTE Faculty of Science. The initial specification was again drawn up by the customer. Like the hydrological recorder, this recorder also had to be designed for unattended, long-term field measurement. Unlike the hydrological recorder, this one also had to be prepared for querying and saving data under field conditions and for battery replacement. Testing and checking of the instruments still takes place under laboratory conditions. The requirements placed on the device:
• Data to be recorded: record timestamp, temperature, soil moisture, soil pH
• Temperature measurement:
  o Accuracy: ±0.5 °C or better
  o Measuring range: -10…50 °C
  o Sensor: PT100
  o Number of channels: 3
• Soil moisture:
  o Accuracy: ±5 % or better
  o Measuring range: 0…100%
  o Sensor: gypsum block
  o Number of channels: 3
• pH value:
  o Accuracy: ±0.2 pH
  o Measuring range: 0…10 pH
  o Sensor: glass pH electrode
  o Number of channels: 1
• Sampling interval: at most 5 minutes
• Continuous operating time: at least 3 months
• Operating temperature range: -10…50 °C
• PC connection: USB port
The sensors for the instrument were supplied by the customer, so they were given.

TABLE 1. SENSORS USED IN THE INSTRUMENT

Quantity     | Type     | Manufacturer      | Measuring range
Temperature  | SE011    | Picotech          | -30...300 °C
Moisture     | 14.22.05 | Eijkelkamp        | 0...100 %
pH           | HI1053B  | HANNA Instruments | 0…12 pH
Examining the sensors' data sheets revealed that the temperature and pH sensors are accurate enough to meet the specification, but the gypsum-block moisture sensor is not: the manufacturer does not recommend it for accurate measurements, only for recording trends. Measuring with the gypsum block raises several problems:
• the sensor erodes, it "wears away" during use
• it has significant hysteresis, mainly in the drying phase
• it is sensitive to dissolved salts
• it is temperature-dependent
• an AC measuring circuit is required; with DC measurement, electrolysis of the water produces false results

Despite all this, this sensor type was used to keep costs at a reasonable level, but the result it provides can only be indicative. A DC measuring circuit was built instead of an AC one. This was permissible because the measurement is not continuous but sampled: at intervals of several minutes, a current of mA magnitude flows through the sensor for a few ms. The measuring circuit of the PT100 sensor is a simple resistance-to-voltage converter; its design posed no particular problem. The pH sensor provides a voltage as its output signal, which is proportional to the pH value of the sensed medium. This voltage falls roughly in the -500 mV...+500 mV range, with a slope of about 60 mV/pH (at 25 °C). Both the output voltage range and the slope are temperature-dependent, so the measurement result must be corrected with the temperature measured by the PT100 sensors. The output impedance of the pH electrode is high, so the circuit connected to it must also be high-impedance. In the circuit used, the voltage to be measured is offset by a 500 mV reference voltage (so the voltage presented for measurement is always positive, which matters because of the multiplexer and A/D converter used), and the electrode's output voltage is then amplified to the desired signal level by a high-input-impedance circuit built around an operational amplifier. In the interest of high-accuracy analogue measurements I did not use the 10-bit A/D converter built into the microcontroller but an external, relatively high-resolution 16-bit type that had already performed excellently in other devices. In designing the rest of the device's circuits (RTC, memory, USB port, power supply, energy-saving circuitry), I used the solutions proven in the hydrological recorder wherever possible, or ones that had already been applied successfully in other devices (fig. 6).
Fig. 6 Simplified block diagram of the geoecological recorder
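The pH conversion described above can be sketched as follows. The ideal Nernst slope of about -59.16 mV/pH at 25 °C, its proportionality to absolute temperature, and the zero point at pH 7 are standard electrode theory assumed for illustration; the 500 mV offset mirrors the shift mentioned in the text (amplifier gain and ADC scaling are omitted):

```python
# Convert a pH electrode voltage to a pH value, compensating the slope
# with the temperature measured on the PT100 channel. An ideal electrode
# gives 0 V at pH 7 with about -59.16 mV/pH at 25 degC (Nernst slope);
# in the instrument the signal is shifted by a +500 mV reference first.
OFFSET_V = 0.500          # reference offset added before the ADC
SLOPE_25C = -0.05916      # V per pH unit at 25 degC (ideal electrode)

def ph_from_voltage(v_adc: float, temp_c: float) -> float:
    """pH from the offset electrode voltage, temperature-corrected."""
    v_electrode = v_adc - OFFSET_V                   # undo the 500 mV shift
    slope = SLOPE_25C * (temp_c + 273.15) / 298.15   # Nernst slope ~ T
    return 7.0 + v_electrode / slope

print(round(ph_from_voltage(0.500, 25.0), 2))  # 7.0 (0 V electrode signal)
```

A real electrode deviates from the ideal slope and zero point, which is exactly why the recorder is calibrated together with its sensors against reference solutions.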
The instrument operates as follows: the device wakes from sleep mode on an interrupt generated by the RTC, powers up the measuring circuits, the multiplexer, the A/D converter, the memory and the other auxiliary circuits, performs the measurement by stepping through the channels (at most 7), saves the data, and returns to sleep mode. If the user wants to read out the measurement results, the device must be de-energised (the battery clamps must be disconnected), then connected to a PC's USB port, and a terminal program must be started. When the device is powered up again, a blinking LED indicates for 10 seconds that it is waiting for a command over USB, and a message appears in the terminal program with the list of available (single-character) commands (read-out, memory erase, memory test, etc.). If the recorder receives a command within the 10 s, it sends messages instructing the user to the terminal window, and the various tasks can be carried out easily. If no message arrives while the LED is blinking, the device enters sleep mode and begins data collection. The download goes into an ASCII file; the fields are separated by semicolons and the records by line feeds, following the rules of a CSV file. The CSV file can be read into Microsoft Excel and easily evaluated there. The recorder must be calibrated together with the sensors connected to it. Calibration can be performed by the user under laboratory conditions; no settings need to be made on the device. During calibration, sampling is densified; the measurement results for the reference quantities can be downloaded to a PC as a CSV file in the normal way, and later records can be corrected on the basis of these results. Several short and medium-length (max. 1 month) experimental measurements were made with the device on the Aggtelek and Tapolca karsts during 2010, 2011 and 2012, with satisfactory results.
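The semicolon-separated download can be processed with any CSV reader; a minimal sketch follows. The column layout (sample number, temperature, moisture, pH) is an assumption for illustration; the real file layout may differ:

```python
# Parse a downloaded record file: semicolon-separated fields, one record
# per line, as described in the text. The column layout assumed here
# (sample number, temperature, moisture, pH) is illustrative only.
import csv
import io

sample_download = "1;21.5;37;7.2\n2;21.4;38;7.1\n"

records = []
for row in csv.reader(io.StringIO(sample_download), delimiter=";"):
    records.append({
        "sample": int(row[0]),
        "temp_c": float(row[1]),
        "moisture_pct": float(row[2]),
        "ph": float(row[3]),
    })

print(len(records))          # 2
print(records[0]["temp_c"])  # 21.5
```

The same reader works on a file object instead of the in-memory string, and the semicolon delimiter is also what Excel expects in many European locales.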
Unfortunately, the evaluation of the measurement results has only partly been carried out, and the recording of further, longer-term series with greater scientific value is also still pending.

VI. Summary, evaluation, possibilities for further development

The hydrological recording instruments passed with flying colours. It has been proven that the reed-relay measuring principle is suitable for continuous, relatively high-resolution liquid-level measurement tasks, and not only for level switching. The project was concluded successfully after almost 3 years of development work. During these three years my part consisted of developing the electronics, manufacturing the instruments' electronics, preparing them for measurement, reading out the results and providing advice. All this almost pales beside the dedication, perfectionism, expertise and determination of the customer, Zsolt Galgóczy, who as a doctoral student at ELTE did an enormous amount of work. His task was the design and manufacture of the mechanics, deployment and collection, and of course, although this is no longer part of the project in the narrow sense, the evaluation of the measured time series. An excellent working relationship developed during the work, in which we often drove each other forward with our ideas and results. There was considerable interest in the instrument from several universities and other state and non-state institutions, so registering an industrial design right was also raised to protect the intellectual property. As a possible further development, remote monitoring is conceivable: the data could be queried via a GSM modem, for example, and an alarm could be generated in the case of a suddenly rising or high water level, or of intense precipitation. Naturally, if price were no object, i.e. if costs were not tightly constrained, numerous further ideas, useful functions and convenience features could also be implemented. The experimental operation of the recorder for geological research measurements has taken place and can be called successful. The device's software worked properly; minor modifications to its electronics are needed for more accurate measurement and for the use of a different soil-moisture sensor type with more favourable parameters, while its mechanical design is adequate.
References
[1] Molnár Zsolt, "Regisztráló készülék vízgyűjtők hidrológiai tulajdonságainak vizsgálatához," XXVI. Nemzetközi Kandó Konferencia, Budapest, 2010.
[2] NIVELCO ZRt., "A NIVOCAP kapacitív szinttávadó oldala," http://nivelco.hu/site.php?upar=PRODUCT&pro_id=10172&lang=hu
[3] Galgóczy Zsolt, "Morfometriai paraméterek vizsgálata a Nagy-Szamos forrásvidékén," Földrajzi Közlemények 128 (1-4), pp. 89-103, 2004.
[4] Galgóczy Zsolt, "Csapadékintenzitás-mérő és vízszintregisztráló műszerek tesztelése egy hegyvidéki kísérleti vízgyűjtőn (Radnai-havasok)," Geográfus Doktoranduszok IX. Országos Konferenciájának Természetföldrajzos Tanulmányai, Szeged, 2009.
VIRTUAL REALITY BASED SIMULATION ENVIRONMENT FOR AUTONOMOUS INTELLIGENT ROBOTS Tibor Malkó, Peter Megyeri University of Pécs, Pollack Mihály Faculty of Engineering and Information Technology
INTRODUCING S3D-TV INTO THE „REPETITORIUM FERNSEHTECHNIK“ Peter Möhringer University of Applied Sciences, Würzburg-Schweinfurt D-97421 Schweinfurt, Ignaz-Schön-Str. 11 Germany email: peter.moehringer@fhws.de
Abstract: Though 3D cinema and 3D television techniques have been well known for many years, the current trend towards 3D cinema productions and large home TV monitors has caused more than a 3D renaissance. 3D movies fill cinema halls, and an increasing number of home users watch 3D TV and 3D videos. In teaching the subject "television systems", this challenge shall not only be met by a new chapter in a book, but also by a new part of our "Repetitorium Fernsehtechnik". The described new part shall consist of a collection of pictures and video clips interactively explaining the special demands in 3D shooting, post-production and distribution. The paper will outline selected problems and present some first results.

I. Introduction

The HTML-based learning tool "Repetitorium Fernsehtechnik", started in 1999, has been updated and enhanced with new learning material step by step. It is used by students for self-learning and revision, as well as to support lessons with animated demonstrations and transparencies. The current trend towards stereoscopic television and video was the reason for a new chapter, S3D-TV. [1,2]
II. The perception of depth, or "what makes pictures 3D?"

There are different types of depth cues, allowing an appreciation of the location of objects in a picture. Monocular cues, using only one eye, already give a lot of information about the distance of objects. But only the lateral displacement of the eyes, which yields two images from slightly different angles, allows sharp stereoscopic depth discrimination. [3,4,5]

A. Monocular cues:

1. Relative size: Objects of the same size appear smaller in the distance. (People in fig. 1)
2. Interposition: Of overlapping objects, the overlapped object is farther away. (Stone and monument in fig. 2)
3. Linear perspective: Parallel lines converge with increasing distance. (Street and pavement in fig. 1)
4. Aerial perspective: Distant objects appear more blue; blur indicates great distance. (Background in fig. 2)
5. Light and shade: Give information about depth.
6. Monocular movement parallax: Driving in a car, near objects seem to move fast backwards, while distant objects seem to accompany the car.

Fig. 1 Examples of depth cues
Fig. 2 Examples of depth cues
B. Binocular cue:

1. Disparity: Due to the lateral displacement of the eyes, the images of the same object perceived by the left and the right eye are seen under slightly different angles, leading to retinal disparity. This disparity, related to the distance of the object, allows a sharp appreciation of the distance. In the case of very close objects, the convergence causing crossed eyes enhances this effect. The influence of the cues is listed in table 1.
Table 1 Influences of the cues

Obviously, stereoscopic disparity is only one of seven cues, and its influence is limited to distances of less than 30 m. Human depth perception depends on all cues, and contradictions causing paradoxes have to be avoided thoroughly.

III. S3D-TV

The Stereoscopic 3-Dimensional Television system, S3D-TV, uses the binocular depth cue in addition to the monocular depth cues. In fig. 3, the shooting of S3D pictures is explained.

Fig. 3 Two cameras shooting S3D pictures

We use two cameras mounted with a small lateral displacement, acting similarly to the human eyes. The views of the cameras are provided for the right and left eye, respectively. Obviously, only the triangular area is covered by both cameras. It is the safe acting area, where both cameras, and thus both eyes, will see the same objects. Only the safe acting area should be used by the actors; if it is not, the effect of retinal rivalry occurs. The cameras are focused at the virtual screen's distance, and the pictures will be reproduced on a screen or a display in this position for the viewing audience. In our figure, two objects, a star and a circle, are located at different distances. The star is "behind" the screen, so a positive disparity occurs on the screen: the star seems shifted to the right for the right eye and to the left for the left one. This holds for all objects behind the screen. Objects in front of the screen also show a lateral disparity, but the shifting direction is reversed: negative disparity. Only objects at the screen position have no disparity. To preserve the desired perception of depth, the relation of the display's diameter, distance and disparity has to be adapted appropriately.

IV. Traps and problems

A. Different objects

To preserve comfort in the audience, some restrictions are necessary. Pictures of different objects provided to the two eyes will cause confusion in the brain, leading to superposition, suppression or retinal rivalry. Different objects,
for instance a horizontal line being presented to one eye and a vertical line to the other, will quickly evoke vertigo and headache in the watching audience.

B. Cue paradox

Cue paradoxes occur if different depth cues result in opposite interpretations. A typical example is the picture of a ball kept at fixed size while the disparity changes. The disparity announces that the ball's distance is changing, but the missing size change contradicts this. Another often-appearing paradox is an object reaching out of the screen towards the viewer but being cut by the screen's border: the disparity creates an assumption of vicinity, while the cut is located at the screen's position.
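The geometry behind positive and negative disparity can be made concrete with similar triangles; the eye separation and viewing distance below are illustrative values, and the formula assumes a flat screen viewed head-on:

```python
# Perceived depth of a point shown with horizontal screen disparity d:
# with eye separation e and viewing distance D (metres), similar triangles
# give Z = e * D / (e - d). Positive d (uncrossed) places the point behind
# the screen, negative d in front; d >= e would demand divergently crossed
# eyes and must be avoided. Values below are illustrative assumptions.
E = 0.065   # m, assumed eye separation
D = 3.0     # m, assumed viewing distance

def perceived_depth(d: float, e: float = E, dist: float = D) -> float:
    """Distance from the viewer at which the point appears to lie."""
    if d >= e:
        raise ValueError("disparity too big: would force divergent eyes")
    return e * dist / (e - d)

print(round(perceived_depth(0.0), 3))     # 3.0 (on the screen)
print(round(perceived_depth(0.0325), 3))  # 6.0 (behind the screen)
```

The guard clause corresponds to the fig. 4 c) situation: on a large screen the same picture disparity covers more centimetres, which is how d can accidentally exceed e.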
C. Disparity and crossed eyes (strabismus)

Watching distant objects, the viewer is looking straight ahead, see fig. 4 a), and he will be looking with convergent, crossed eyes while watching objects in the safe acting area. Short distances demand strongly crossed eyes, middle distances slightly crossed ones. Both kinds of looking are familiar to the human vision system; however, strongly crossed eyes are tiring and should not be forced by the film's dramaturgy for too long. In contrast, the situation in fig. 4 c) is a nuisance. It only happens in case of the disparity being out of range, and it forces divergently crossed eyes. Eyes looking outwards have no reasonable counterpart in human experience and cause headache and vertigo very fast. It may occur by mistake if S3D-TV is displayed on large screens, and it has to be avoided in any case.

Fig. 4 Eyes watching objects: a) distant, b) near, c) disparity too big

D. Window violation

As mentioned already, objects truncated by the window borders cause problems. For example, an object in the foreground, outside the safe acting area and thus in only one camera's view, will lead to flicker at the position of the object. Or an object reaching out of the screen is cut by a horizontal window border: the disparity says the object is in front of the screen, but it is bounded by the frame at the window's position. See the dog's head: it is obviously reaching out of the window, but the forehead seems to be behind the screen.

Fig. 5 Tommy the dog reaching out to the camera

E. False stereo

Last but not least, false stereo, i.e. left/right exchanges, have to be avoided. Wrong shutter sync, red/cyan mistakes or wrong polarity filters lead to contradictions and will cause a vertiginous feeling.

V. Examples

For examples please refer to the "Repetitorium Fernsehtechnik" and find:
* Presentation of monocular and binocular depth cues.
* Presentation of paradoxes in pictures and clips.

VI. Conclusion

As was explained and exemplarily shown, S3D-TV requires a number of new considerations. Already while shooting the take, some restrictions like the safe acting area and the observance of the cues' influences have to be taken into account. In post-production and in the adaptation to screen sizes, especially divergently crossed eyes have to be avoided, and a strong convergence of the eyes should also be only a short-time effect.
VII. Acknowledgment The author expresses his great gratitude to Hilmar Endres and to Sebastian Gehrig for shooting the pictures and videos as well as for editing and for support in producing the presentation.
References
[1] Repetitorium Fernsehtechnik, www.fh-sw.de/sw/fachb/et/labinfo/v/repetitorium/repetito.ium/einleit.htm
[2] Möhringer, P.; Rauch, A.: "A Practical Approach to Digital Television by Analysing MPEG-2 Coded Video", XIX. Conference SIP, Osijek, 2003
[3] Hottong, N.; Walter, P.: "S3D-HD-Produktion Teil 1", FKT 11/2010, 542-552
[4] Hottong, N.; Walter, P.: "S3D-HD-Produktion Teil 2", FKT 3/2011, 125-130
[5] Tauer, H.: "Eine neue Einheit für Stereo-3D", FKT 7/2012, 351-357
POWER QUALITY INDICES OF THE FIRST ON THE GROUND PV POWER PLANT IN EASTERN CROATIA Srete Nikolovski, Zvonimir Klaić, Krešimir Fekete Power System Department Faculty of Electrical Engineering HR-31000 Osijek, Kneza Trpimira 2B. Croatia email: srete@etfos.hr
Abstract: The paper presents a sample case of the first photovoltaic (PV) power plant, SEG1, in Eastern Croatia. Power quality indices of the PV power plant SEG1 were measured before and after connection of the PV power plant to the distribution network. The SEG1 PV power plant is the first ground-mounted installation in Slavonija and Baranja and occupies an area of 2000 m2. The installed power of the PV power plant is 30 kW, and the plant is connected to the low-voltage side of the transformer station Osijek 4, TS 20/0.4 kV, through SUNNY TRIPOWER 15000TL power inverters. Power quality indices for the SEG1 PV power plant were measured using three-phase network power analyzers and are presented according to the Croatian norm HRN EN 50160, which is in accordance with the European norm EN 50160. The evacuation of active and reactive power from the PV power plant into the distribution network was simulated and presented using DIgSILENT PowerFactory software. The short-circuit ratio between the three-phase short-circuit power at the point of common coupling (PCC) and the PV plant's rated power is also checked in accordance with the HEP national grid code, and the influence of the SEG1 PV power plant on the distribution network is analyzed. Real records of some power quality indices are presented.

I. Technical data of the PV power plant
The first ground-mounted photovoltaic power plant in Eastern Croatia (near the town of Osijek) was installed in 2011. The system is composed of 144 PV modules (rated power 205 W each [1]) connected in eight strings. Fig. 1 shows the PV module and power inverter used [1][2]. The dimensions of one PV module are 1663 mm × 998 mm × 35 mm. The whole installation is spread over an area of 2000 m2. Every four strings are connected to a string inverter [2], as shown in Fig. 2. There are two inverters with a rated power of 15 kW each.
Fig. 2. Layout of the studied PV plant
Fig. 1. PV module and power inverter used in PV plant
The inverters are then connected via an electrical switchboard (where the metering devices were located) to the low voltage side (0.4 kV) of the distribution transformer station Osijek 4, TS 20/0.4 kV. Fig. 2 shows the layout of the PV power plant. The expected annual production of the PV plant is about 38 MWh [3].
II. General about power quality
Electrical energy is a product whose quality depends not only on the elements that go into its production, but also on the way in which it is used at any instant by the equipment of multiple users. Electrical equipment has become progressively complex due to the use of microprocessors and electronic devices in PCs, TV sets, video recorders, controllers, converters, inverters and other industrial devices, and due to the way it interacts with other electrical equipment. As a result of that complexity, some types of modern electrical equipment are more sensitive to deviations from the sinusoidal supply voltage. At the same time, the same or other equipment causes modifications to the characteristics of the supply voltage [4].
The main parameters of the supply voltage are: voltage magnitude, frequency, waveform and symmetry (in multi-phase systems). According to [5], power quality (PQ) analysis typically includes the following properties of the supply voltage:
• voltage dips and interruptions,
• harmonics and interharmonics,
• temporary overvoltages,
• swells,
• transient overvoltages,
• voltage fluctuations,
• voltage unbalance,
• power frequency variations,
• DC in AC networks,
• signaling voltages.
Power quality can be defined as the degree of deviation from the nominal values of the above parameters. The European standard EN 50160 (also adopted as the Croatian standard HRN EN 50160:2008) gives the main characteristics of the voltage supplied by the public distribution system at the customer's supply terminals in public low-voltage and medium-voltage electricity distribution systems under normal operating conditions [5].
III. Power quality measurement
In the measurement campaign a class A power quality analyzer was used. The measurements were taken at the point where the PV plant is connected to the distribution grid, as shown in Fig. 2. Power quality was measured in two periods:
1. the week before the connection of the PV plant to the grid – from 15th to 22nd December 2011,
2. the week after the connection of the PV plant to the grid – from 12th to 19th January 2012.
A. Power quality measurement results before the connection of the PV plant
In this subsection, results from the power quality measurements carried out in the week before the connection of the PV plant are presented and briefly commented. Fig. 3 presents summarized measurement results of the power quality indices for the first measuring period.
Fig. 3. Summarized power quality indices according to the standard EN 50160:2010
It is obvious that all indices are within the limit values of the European norm EN 50160 [5]. More detailed analyses of the measured power quality indices are presented in the subsections below. Fig. 4 shows the average value of the RMS voltage at the PCC before the PV plant connection. As can be seen from Fig. 4, the average values of the RMS voltage are within the range of ±10 % of the rated voltage (230 V), which is in accordance with EN 50160 [5].
Fig.4. Average value of RMS voltage before PV plant connection
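The ±10 % supply-voltage criterion described above can be expressed as a simple programmatic check. The sketch below is illustrative only; the sample values are invented, not the recorded measurements from the campaign:

```python
# Check that mean RMS voltages stay within ±10 % of the 230 V rated
# voltage, as required by EN 50160 for low-voltage networks.
RATED_V = 230.0

def within_en50160(voltages, tolerance=0.10):
    """Return True if every RMS voltage sample lies inside the band."""
    low, high = RATED_V * (1 - tolerance), RATED_V * (1 + tolerance)
    return all(low <= v <= high for v in voltages)

# Illustrative samples (not the measured data from the paper)
samples = [228.4, 231.9, 235.2, 224.7, 230.1]
print(within_en50160(samples))  # all samples within 207-253 V
```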
Ideally, an electricity supply should invariably show a perfect sinusoidal voltage waveform at every customer location. However, utilities often find it hard to preserve such desirable conditions. The deviation of the voltage and current waveforms from sinusoidal is described in terms of waveform distortion, i.e. harmonic distortion [6]. A harmonic component in an AC power system is defined as a sinusoidal component of a periodic waveform that has a frequency equal to an integer multiple of the fundamental frequency of the system. Harmonics of the original waveform can be obtained by Fourier analysis. To describe the harmonic distortion of the signal, the well-known Total Harmonic Distortion (THD) index is used. The THD factor measures the signal distortion as the RMS sum of the voltages of all present harmonic frequencies relative to the fundamental frequency [7]. Fig. 5 shows the THD of the voltage before the PV plant connection. The measured value of the voltage THD is below 8 %, which is the limit stated in EN 50160 [5].
Fig.5. THD of voltage before the PV plant connection
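The THD definition above (RMS sum of harmonic voltages relative to the fundamental) can be computed directly from a measured spectrum. A minimal sketch, with invented harmonic magnitudes rather than the recorded spectrum:

```python
import math

def thd_percent(v1, harmonics):
    """Total harmonic distortion: RMS sum of the harmonic voltages
    relative to the fundamental voltage v1, expressed in percent."""
    return 100.0 * math.sqrt(sum(v * v for v in harmonics)) / v1

# Illustrative magnitudes in volts (fundamental = 230 V); values are
# examples, not the measured data from the campaign.
fundamental = 230.0
harmonics = [4.6, 2.3, 1.2]     # e.g. 3rd, 5th, 7th harmonic
thd = thd_percent(fundamental, harmonics)
print(thd < 8.0)                # compare against the EN 50160 limit
```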
Spectral analysis of the voltage signal in the week before the PV plant connection is shown in Fig. 6. As can be seen, the dominant voltage harmonics recorded during the measurement campaign are of the odd order, especially the 3rd, 5th, 7th and 9th. The even-order harmonics are almost negligible. All harmonic values are in accordance with the standard EN 50160 [5].
Fig.6. Harmonic spectrum of the voltage signal before PV plant connection
Voltage fluctuations – flickers – are a series of voltage changes, or cyclical or random variations in the voltage envelope, characterized by the frequency of variation and the magnitude [4]. Fig. 7 shows the short-term flicker index Pst, which is below the value of 1 in more than 90 % of the week (the limit stated in EN 50160 [5]).
Fig.7. Short flicker index Pst of supply voltage at 0,4 kV before SEG1 PV power plant connection
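A flicker-compliance check of the kind described above reduces to counting how many Pst samples stay below the limit. The sketch below is illustrative (the series is invented, and the 95 % compliance share is the figure usually associated with the EN 50160 weekly flicker criterion, not a value taken from the paper):

```python
def pst_compliant(pst_values, limit=1.0, required_share=0.95):
    """True if the share of short-term flicker (Pst) samples that do
    not exceed the limit reaches the required share of the week."""
    ok = sum(1 for p in pst_values if p <= limit)
    return ok / len(pst_values) >= required_share

# Illustrative Pst series, not the recorded flicker measurements
pst = [0.3, 0.5, 0.4, 0.2, 0.9, 0.8, 0.6, 0.3, 0.4, 0.5]
print(pst_compliant(pst))
```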
B. Power quality measurement results after the connection of the PV plant In this subsection, results from the power quality measurements carried out in the week after the connection of the PV plant are presented and briefly commented. Comparing the results from the previous subsection with those from this subsection, it can be concluded how the PV power plant affects the power quality of the utility voltage. Fig. 8 presents summarized measurement results of the power quality indices for the second measuring period. All power quality indices are within the limit values of the European norm EN 50160 [5].
Again, the dominant voltage harmonics recorded during the measurement campaign are of the odd order, especially the 3rd, 5th, 7th and 9th. However, all harmonic values are in accordance with the standard EN 50160 [5].
Fig. 8. Summarized power quality indices recorded after the PV plant connection
Fig.11. Harmonic spectrum of the voltage signal after the PV plant connection
Fig. 9 shows the average value of the RMS voltage at the PCC after the PV plant connection. The average values of the RMS voltage are again within the range of ±10 % of the rated voltage (230 V), so it can be concluded that the PV plant does not have a negative impact on this power quality index.
Fig. 12 shows the short-term flicker index Pst, which is again below the value of 1 in more than 90 % of the week. It is interesting to notice that during the whole measurement period (not just 90 % of it) the value of the short-term flicker index Pst is below 1.
Fig.9. Average value of RMS voltage after PV plant connection
Fig.12. Short flicker index Pst of supply voltage at 0.4 kV after the SEG1 PV power plant connection
Fig. 10 shows the THD of the voltage after the PV plant connection. The value of the THD index is slightly higher than before the PV plant connection, but it is still below the limit value of 8 %. It can be concluded that the PV plant has a small impact on the harmonic distortion of the utility voltage. A more detailed analysis of the harmonic influence of PV power plants can be found in [8].
Fig.10. THD of voltage after the PV plant connection
Spectral analysis of the voltage signal in the week after the PV plant connection is shown in Fig. 11.
IV. Short circuit analysis of PV power plant SEG 1 and its impact on the existing distribution network
PV power plant SEG 1 has a rated power of 30 kW. According to the Croatian grid code [9], when the rated power of a power plant is low in comparison to the short-circuit power of the grid at the point of connection, it is enough to examine whether the ratio between the short-circuit power of the grid and the rated power of the power plant is higher than 150. If this condition is satisfied, the power plant can be connected to the public grid. To calculate the short-circuit power of the distribution grid at the PCC, a software model created in DIgSILENT PowerFactory is used [10] (see Fig. 13). The calculated short-circuit power is 13.28 MVA, thus the ratio is 13.28/0.03 ≈ 442.7, which is higher than 150.
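The grid-code screening described above is a one-line calculation; a minimal sketch using the values reported in the paper:

```python
def connection_allowed(sk_mva, plant_mva, min_ratio=150.0):
    """Croatian grid-code screening: the grid short-circuit power at
    the PCC must exceed the plant rated power by the minimum ratio."""
    return sk_mva / plant_mva >= min_ratio

# Values reported in the paper: Sk = 13.28 MVA, Pn = 30 kW = 0.03 MVA
ratio = 13.28 / 0.03
print(round(ratio, 1))                   # ~442.7, well above 150
print(connection_allowed(13.28, 0.03))   # connection is permitted
```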
Fig.13. Model of distribution network and connection of SEG1 PV power plant
V. Conclusion The impact of the PV power plant SEG 1 on the distribution grid has been analyzed through power quality indices according to EN 50160 [5]. VI. Acknowledgment The authors gratefully acknowledge the contributions of Ian X. Austan, Ann Burgmeyer, C.J. Essel, and S. H. Gold.
References
[1.] Solvis photovoltaic modules SV60-200 12YRG. Technical specification. Available online: http://www.solvis.hr/wp-content/uploads/2011/12/SOLVIS-DS-SV60-12YRG-20111206.pdf
[2.] SMA power inverter Sunny Tripower 15000 TL. Technical specification. Available online: http://www.sma.de/en/products/solar-inverter-withouttransformer/sunny-tripower-15000tl-20000tleconomic-excellence.html#Overview-8582
[3.] Solvis, Technical description of PV power plant SEG 1, in Croatian, 2011.
[4.] S. Nikolovski, Z. Klaic and Z. Novinc, „Statistical analysis of measured power quality indices in distribution networks," in Proceedings of the 17th International Wroclaw Symposium and Exhibition on Electromagnetic Compatibility, Wroclaw, 29 June – 1 July 2004.
[5.] EN 50160:2010. Voltage characteristics of electricity supplied by public distribution systems.
[6.] F.C. De La Rosa, Harmonics and Power Systems, CRC Press Taylor & Francis Group, Boca Raton, 2006.
[7.] S. Nikolovski, P. Maric and Z. Klaic, „Some experience with biogas and biomass power plants in operation on distribution network in Eastern Croatia," SIP 2011.
[8.] K. Fekete, Z. Klaic and Lj. Majdandzic, „Expansion of the residential photovoltaic systems and its harmonic impact on the distribution grid," Renewable Energy, vol. 43, July 2012, pp. 140-148.
[9.] Croatian Electric Utility HEP, Grid code (internal technical standard). Available online: http://ops.hep.hr/ops/odbor/odbor.aspx [in Croatian]
[10.] DigSILENT PowerFactory, official web page: http://www.digsilent.de/
REMOTE CONTROL OF ANTHROPOMORPHIC ROBOTIC PLATFORM FOR SOCIALLY ACCEPTABLE AND ADEQUATE INTERACTION IN HUMAN’S WORKING ENVIRONMENT Simon János1, István Matijevics2 1 Subotica Tech, Department of Informatics e-mail: simon@vts.su.ac.rs, 2 University of Szeged, Department of Informatics e-mail: mistvan@inf.u-szeged.hu
Abstract: A mobile service robot is an agent with the ability to move around in a networked environment. Mobile agents have become a very popular research topic lately, and many technologies facilitate moving objects between hosts. In this work, we consider a remote-learning interface and evaluate the contribution of different interface components to the overall performance and learning ability of end users. The evaluated components are the control method of the robotic arm and the use of a three-dimensional simulation tool before and during the execution of a robotic task. Keywords: distant monitoring, personal robot, WIFI I. Introduction Based on analytical and experimental results, the authors conclude that humanoid robotics is a challenging and attractive technical and scientific field for robotics research. The real utility of humanoid robots has still to be demonstrated, but personal assistance is a promising application domain in which they can become mainstream technology. Personal robotics also presents difficult technical problems, especially related to the need for achieving adequate safety, proper human–robot interaction, useful performance, and affordable cost [1]. When these problems are solved, personal robots will have an excellent chance for significant application opportunities, especially if integrated into future home automation systems, and if supported by the availability of humanoid robots. The MINI2440 development board is based on the Samsung S3C2440 microprocessor and can be a promising development platform for the teleoperation of an anthropomorphic robotic platform in a human's working environment. Its PCB is a four-layer board with equal-length signal routing, which ensures signal integrity. MINI2440 boards are manufactured in mass production and released with strict quality control. On startup the board directly boots the preinstalled Linux by default; there are no extra setup steps or configuration procedures to start the system.
It is easy for users to get started.
Anyone with very basic knowledge of the C language can become an application developer. The FriendlyARM mini2440 comes with a 400 MHz Samsung ARM9 processor. The board measures 100 × 100 mm, ideal for learning about ARM9 systems [13]. On board are 1024 MB SDRAM and NAND flash, 2 MB NOR flash with a preinstalled BIOS, and a 100M Ethernet RJ-45 port (powered by the DM9000 network chip). The MINI2440 development board currently supports Linux 2.6.29 with Qtopia, WinCE.NET 5.0, WinCE.NET 6.0 and Android 2.1.
Figure 1. Mini2440 development board
The Mini2440 is a low-cost and practical ARM9 development board, currently one of the most cost-effective learning boards.
Qtopia 2.2.0 is developed by Qt, based on the Qt/Embedded 2.3 graphical interface. After Qtopia 2.2.0, Qt has not released any new PDA-oriented graphical interface; the latest Qtopia targets cell phones [2], but the Qt/Embedded libraries are still being developed. Most of the released systems for the board have Linux with Qtopia 2.2.0 installed by default. It has various useful utilities: just power the board on and you can experience them. The system supports both a USB mouse and a touch screen simultaneously, and supports USB mouse and keyboard plug and play [6,12].
II. Web server Boa is a single-tasking HTTP server that runs on Qtopia. Unlike traditional web servers, it does not fork for each incoming connection, nor does it fork many copies of itself to handle multiple connections. It internally multiplexes all of the ongoing HTTP connections, and forks only for CGI programs (which must be separate processes), automatic directory generation, and automatic file gunzipping [11]. The developed GUI was implemented with Boa. To use the GUI, the user only needs to connect to the personal robot using a Web browser. The Web browser connected to the robot will load the site automatically.
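Since Boa dispatches dynamic requests to external CGI processes, a robot-control endpoint can be a small standalone script. The sketch below is a hypothetical illustration, not the authors' implementation: the command names and the dispatch logic are invented for the example.

```python
#!/usr/bin/env python
# Minimal CGI sketch for a robot-control endpoint served by Boa.
# The command set and the way a command would reach the motor
# controller are assumptions, not taken from the paper.
import os
from urllib.parse import parse_qs

VALID_COMMANDS = {"forward", "backward", "left", "right", "stop"}

def handle_request(query_string):
    """Parse ?cmd=... and return (status, body) for the HTTP reply."""
    params = parse_qs(query_string)
    cmd = params.get("cmd", [""])[0]
    if cmd not in VALID_COMMANDS:
        return "400 Bad Request", "unknown command"
    # A real handler would forward cmd to the drive electronics here.
    return "200 OK", "executing " + cmd

if __name__ == "__main__":
    status, body = handle_request(os.environ.get("QUERY_STRING", ""))
    print("Status: %s\r\nContent-Type: text/plain\r\n\r\n%s" % (status, body))
```

Because Boa forks one process per CGI request, the script needs no threading or connection handling of its own.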
Figure 3. Remote robot control
Figure 2. Qtopia running on mini2440
A user can control the IPR (Internet-based Personal Robot) at the remote site through the internet using a simulator provided at the local site. The user regards the status of the virtual IPR at the local site as that of the real IPR at the remote site. Since the user cannot perceive the environment of the remote site, it is expected that the real IPR moves as the virtual IPR does [3]. However, because of time delay, the path error and the time difference between the real IPR and the virtual IPR, which increase as time goes on, have to be compensated.
The GUI loaded in the Web browser is shown in Figure 3. The GUI is linked with the robot using TCP/IP. Only the first user connected to the robot can use the control modes of the GUI; other users can only monitor the status of the robot. In the center of the GUI, the map and the status of the robot are displayed. At the bottom of the GUI, there are four buttons for the selection of the control modes and several buttons for generating the commands [12]. On the right side of the GUI there are controls for the motion of the CCD camera mounted on top of the robot and for editing the job sequence. The order and the status of the connection can be verified in the bottom-right corner of the GUI. III. Wireless control In this project we have used Wi-Fi modules to achieve remote control over an anthropomorphic
robotic platform. For this project we used two USB Wi-Fi modules from the kit, as shown in Figure 4.
The number of internet nodes between Szeged and Subotica was 9, and the round-trip time via the internet was about 32 ms. V. Direct control The user conducted a direct control operation using the direct control mode of the GUI loaded in a Web browser. Figure 6 shows the experimental result [4]. Point S was the initial position of the Boe-bot, and two obstacles were put on the map.
Figure 4. Graphical Scheme of Remote robot control
The hardware basically centers around the Wi-Fi modules and the actuators controlled by the mini2440. The mini2440 base station sends data over Wi-Fi to the mobile measuring station, which drives the mini2440 general-purpose IO pins [10]. The microcontroller drives the motors that move the anthropomorphic robotic platform. IV. Experiment Experiments were performed with the Boe-bot in a real internet environment. The remote site was Szeged, Hungary and the local site was Subotica, Serbia, as shown in Figure 5. The Boe-bot was connected to the internet using Wi-Fi at the remote site, and the notebook computer (Intel Atom 1.7 GHz, 2 GB RAM) of the user was connected to the internet using LAN at the local site [9].
Figure 6. Direct control mode
The result shows that the Boe-bot followed the virtual robot path well in spite of the two obstacles. VI. Supervisory control In this experiment, the user indicated only a goal position on the map of the GUI, and then the Boe-bot generated a moving path and reached the position autonomously. The result is shown in Figure 6. After turning the corner of the corridor, a small path error could be found [7]. This can be considered as an error of the localization of the robot pose. In the experimental results, the dashed line is the path of the virtual robot at the local site, and the solid line is the path of the Boe-bot.
Figure 5. Local site and remote site
strength is sufficient and the direct line of sight is free of obstacles [6]. The navigation system is therefore a key factor. Since human-robot interaction is led by oral communication, the user's speech has to be well understood by the robot. Thereby, a robust and efficient speech recognition system is mandatory.
Figure 7. Navigation map
The scenario for the experiment was as follows: the Boe-bot was to move from point A toward point F in front of an obstacle. The user stopped the robot on the way, since he wanted to see some pictures on the wall [8]. He could see the pictures using the camera control panel on the GUI. After that, the Boe-bot moved to point F and said "Hello!" to the person who sat at point F. After saying that, the Boe-bot showed a message, "ACK!" After the person put it on the top of the robot and touched the OK switch, the Boe-bot moved toward point A. The Boe-bot said "Here is a message" and showed a message. After someone at point C copied the paper and put it on its top, the Boe-bot moved toward the final position A [5]. The experiment with this scenario was successful, as shown in Figure 7. The experiment demonstrated the applicability of the Boe-bot. VII. Testing the system In order to test the smooth running of the system, we developed several experiments in our lab as if they were in a real environment. Our appliance was a laptop screen, and we taught different commands to the robot simply by facing the remote control toward the built-in infrared receiver and pressing the corresponding buttons. At this point, the remote device name, the timings and the coding of the commands themselves are written to the database that will be used by the server. Then, we link each command with a grammar rule in the speech recognition system. The operations learnt by the robot are "turn left", "turn right", "move up", "move down". After many tests, it was observed that the system runs properly, with some key points. The robot pose is a very relevant factor, because it is necessary that the Wi-Fi signal
VIII. Conclusion An experiment was designed and executed, comparing alternative interface designs for the remote learning of robotic operation. The results provide guidelines for a better design of an interface for the remote learning of robotic operation. The main contribution of this paper is the introduction of a new teaching tool for laboratories and the supplied guidelines for an efficient design of such tools. An anthropomorphic robotic platform for socially acceptable and adequate interaction in a human's working environment is a highly challenging area that requires interdisciplinary collaboration between AI researchers, computer scientists, engineers, psychologists and others, where new methods and methodologies need to be created in order to develop, study and evaluate interactions with a social robot. While it promises to result in social robots that can behave adequately in a human-inhabited (social) environment, it also raises many fundamental issues on the nature of social intelligence in humans and robots. IX. Acknowledgments This work was funded by the Provincial Secretariat for Science and Technological Development of the Autonomous Province of Vojvodina, Republic of Serbia, under contract 114-451-2116/2011.
References
[1.] A. Nafarieh and J. How. “A Testbed for Localizing Wireless LAN Devices Using Received Signal Strength,” Communication Networks and Services Research Con- ference, Halifax, 2008, pp. 481-487. [2.] Erin-Ee-Lin Lau, Boon-Giin Lee, Seung-Chul Lee, and Wan-Young Chung, “Enhanced RSSIBased High Accuracy Real-Time User Location Tracking System for Indoor and Outdoor
Environments," International Journal on Smart Sensing and Intelligent Systems, Vol. 1, No. 2, June 2008.
[3.] Gogolak Laslo, Dr Pletl Szilveszter, Gál Péter, Dukai Zoltán, "Observing inland waters with ultrasonic distance measuring by the aid of wireless sensor networks," XXVIII. International Scientific Colloquium "Science In Practice", Serbia, 2010.
[4.] Gyula Mester, "Motion Control of Wheeled Mobile Robots," Proceedings of the 4th International Symposium on Intelligent Systems, SISY 2006, pp. 119-130, Subotica, Serbia, 2006.
[5.] Gyula Mester, "Sensor Based Control of Autonomous Wheeled Mobile Robots," The Ipsi BgD Transactions on Internet Research, TIR, Volume 6, Number 2, pp. 29-34, ISSN 1820-4503, (IF: 0.7), New York, Frankfurt, Tokio, Belgrade, 2010.
[6.] Gyula Mester, "Intelligent Mobile Robot Controller Design," Proceedings of the Intelligent Engineering Systems, INES 2006, pp. 282-286, ISBN: 0-7803-9708-8, London, United Kingdom, 2006.
[7.] Gyula Mester, "Intelligent Mobile Robot Motion Control in Unstructured Environments," Acta Polytechnica Hungarica, Journal of Applied Sciences, Vol. 7, Issue No. 4, pp. 153-165, Budapest, Hungary, 2010.
[8.] K. Yu and Y. J. Guo, "Non-Line-of-Sight Detection Based on TOA and Signal Strength," Personal, Indoor and Mobile Radio Communications, Cannes, 2008, pp. 1-5.
[9.] Masashi Sugano, Tomonori Kawazoe, Yoshikazu Ohta, and Masayuki Murata, "Indoor Localization System Using RSSI Measurement of Wireless Sensor Network Based on ZigBee Standard," in Proc. of Wireless Sensor Networks 2006 (WSN 2006), July 2006.
[10.] Simon János, Goran Martinović, Matijevics István, "WSN Implementation in the Greenhouse Environment Using Mobile Measuring Station," International Journal of Electrical and Computer Engineering Systems, pp. 1-10, Osijek, Croatia, 2010.
[11.] Stefano Tennina, Marco Di Renzo, Fabio Graziosi and Fortunato Santucci, "Locating ZigBee Nodes Using the TI's CC2431 Location Engine: A Testbed Platform and New Solutions for Positioning Estimation of WSNs in Dynamic Indoor Environments," in Proc. of the First ACM International Workshop on Mobile Entity Localization and Tracking in GPS-less Environments (MELT 2008), Sep. 2008.
[12.] Youngjune Gwon, Ravi Jain, and Toshiro Kawahara, "Robust Indoor Location Estimation of Stationary and Mobile Users," in Proc. of IEEE INFOCOM, March 2004.
[13.] Simon János, Matijevics István, "Simulation and Implementation of Mobile Measuring Robot Navigation Algorithms in Controlled Microclimatic Environment Using WSN," Proceedings of the Conference SISY 2011, pp. 1-6, Subotica, Serbia, 2011.
COMPARISON OF GENERAL PURPOSE GRAPHIC PROCESSOR UNITS AS A SUBSTITUTION FOR TRADITIONAL PROCESSORS Tomislav Matić, Milijana Žulj and Željko Hocenski Faculty of Electrical Engineering Osijek J. J. Strossmayer University of Osijek Kneza Trpimira 2b, 31000 Osijek, Croatia email: tomislav.matic1@etfos.hr, milijana.zulj@etfos.hr, zeljko.hocenski@etfos.hr
Abstract: This paper describes two leading general-purpose GPU (Graphics Processing Unit) computing technologies, the ATI Stream and NVIDIA CUDA (Compute Unified Device Architecture) platforms, which use the GPU as a parallel compute device. The programming tools CUDA C and OpenCL (Open Computing Language) are used to demonstrate the usage of the GPU as a parallel computing device on a simple vector multiplication algorithm. Results of several experiments, executed on NVIDIA Geforce 9800 GT and ATI Radeon HD 4650 GPUs, show that NVIDIA gives better results. It is necessary to take into account that GPGPU is still a new and evolving concept, as well as the fact that, looking at the specifications and prices of both graphics cards, the Geforce 9800 GT GPU is better than the ATI Radeon HD 4650 in terms of GPGPU computing. GPGPU graphics cards are important in scientific calculations and in computationally-intensive applications, because such applications can use all of their parallel power and acceleration. Keywords: GPGPU, CUDA, ATI Stream, CPU, GPU, OpenCL. I. Introduction In the past few years the GPU has evolved into a highly parallel, multithreaded, many-core processor with tremendous computational horsepower and very high memory bandwidth. The GPU is specialized for compute-intensive, highly parallel computation and is therefore designed such that more transistors are devoted to data processing than to data caching and flow control [1]. The introduction of NVIDIA's CUDA technology ushered in a new era of improved performance for many applications, as programming GPUs became simpler. Users need not map programs onto graphics APIs, so program development becomes more flexible and efficient. The GPU is suited to address problems that can be expressed as data-parallel computations with high arithmetic intensity (the ratio of arithmetic operations to memory operations) [2]. Because the same program is executed for each data element, there is a lower requirement for sophisticated flow control. For these reasons
many problems can be computed on the GPU: the fast wavelet transform [3], fast DFT [4], sorting algorithms for large lists [5] and other examples in image and media processing applications, general signal processing, physics simulation, etc. CUDA is a proprietary API and set of language extensions that works only on NVIDIA's GPUs. The Khronos Group developed an open standard for parallel programming called OpenCL that can be used for CPUs (Central Processing Units), GPUs, DSPs (Digital Signal Processors) and other types of processors [6]. CUDA and OpenCL present similar features but through different programming interfaces. Both CUDA and OpenCL use special functions called kernels that are executed in parallel on the GPGPU device, but OpenCL is capable of targeting very dissimilar parallel processing devices, whereas CUDA is developed by NVIDIA and can only be used on NVIDIA GPUs. Based on the above it is logical to compare the performance of OpenCL and CUDA
environments on NVIDIA and ATI GPUs. For performance analysis we created simple OpenCL and CUDA kernels. The kernels were executed on NVIDIA Geforce 9800 GT and ATI Radeon HD 4650 GPUs, and time measurements of memory transfer and kernel execution were taken for both programming environments. The rest of the paper is organized as follows. Section 2 covers the NVIDIA CUDA and ATI Stream parallel architectures, their organization and specifications. Section 3 describes the OpenCL and CUDA programming model. Section 4 gives a detailed explanation of the testing environment. Experimental results are analyzed in section 5, and section 6 summarizes the work and concludes the paper.
II. NVIDIA CUDA and ATI Stream architecture
This section covers the GPGPU parallel architectures of the above mentioned platforms.
A. CUDA Architecture The CUDA architecture is built around a scalable array of multithreaded Streaming Multiprocessors (SMs). A multiprocessor consists of eight Scalar Processor (SP) cores for arithmetic operations, two special function units for transcendentals, a multithreaded instruction unit, and on-chip shared memory (Fig. 1). Each multiprocessor has on-chip memory of the four following types (c.f. Fig. 1):
• one set of local 32-bit registers per processor,
• a parallel data cache or shared memory that is shared by all scalar processor cores,
• a read-only constant cache that is shared by all scalar processor cores and speeds up reads from the constant memory space, which is a read-only region of device memory,
• a read-only texture cache that is shared by all scalar processor cores and speeds up reads from the texture memory space, which is a read-only region of device memory; each multiprocessor accesses the texture cache via a texture unit that implements the various addressing modes and data filtering.
Figure 1. Simplified NVIDIA CUDA 1.1 GPU architecture
For communication with the host memory, SMs use the device global memory. Data from the host memory cannot be directly loaded into the SMs' memory space. To access data from the host memory, SMs load data from device global memory into shared memory, constant or texture cache [7]. B. Stream Architecture The ATI Radeon 4000 series GPU (also known as the R700 family) implements the ATI Stream parallel microarchitecture platform for general-purpose GPU applications. An R700-family GPU includes a data-parallel processor (DPP) array, a command processor, a memory controller, and on-chip memory (c.f. Fig. 2). The command processor reads commands that
the host has written to memory-mapped GPU registers in the system-memory address space and sends hardware-generated interrupts to the host when the command is completed.
commands. All of the shader operations are done with 32-bit precision. Shader units in a SIMD engine can share data using the low-latency LDS memory and can speed up data flow using the texture cache of the texture unit [8]. C. Architecture similarities From Figs. 1 and 2, connections between the two architectures can be made. In Stream, the DPP array is divided into SIMD engines; in CUDA, the device is divided into streaming multiprocessors. Every SM can have eight SPs, and every SIMD engine can have eight or sixteen shader units. SPs can share data using shared memory, and shader units can share data using the LDS. Registers, texture cache, constant cache and global device memory can be found in both architectures.
Figure 2. Simplified ATI Stream R700 architecture
The memory controller has direct access to all of the GPU's local memory and the host-specified areas of system memory. To satisfy read and write requests, the memory controller performs the functions of a direct-memory access (DMA) controller, including computing memory-address offsets based on the format of the requested data in memory. The DPP array is organized into SIMD (Single Instruction Multiple Data) engines. Every SIMD engine has eight or sixteen shader units, a local 16 kB data store (LDS) and a texture unit with its own dedicated L1 texture cache. Each shader unit is made up of five stream cores (SCs), a branch unit and general-purpose registers (GPRs). The SCs have different levels of functionality. Four out of the five SCs can handle floating-point multiply-add, floating-point multiply, floating-point add, integer add and dot-product instructions per clock cycle. The fifth SC in each shader unit cannot compute dot products, integer add commands or double precision, but can handle instructions that include integer multiply and division, bit shifting, and transcendental
III. OpenCL/CUDA programming model To utilize the power of GPGPU architectures, the programmer defines functions called kernels. Kernels can be executed by one or more GPGPU devices and are called from the host. A kernel is executed N times in parallel by N different CUDA threads (work-items in OpenCL). Each thread (work-item) executes the same kernel code, but on specific data and/or along a specific pathway depending on its ID. Work-items (threads) are organized into work-groups (blocks) with a unique ID. A single work-item (thread) can be uniquely identified by the combination of its local ID and its work-group (block) ID. Blocks (work-groups) are organized in a grid (NDRange). Threads, blocks, and the grid (work-items, work-groups, NDRange) can be three-dimensional, and each thread (work-item) can be uniquely identified by its dimensions, Fig. 3 [9]. Threads within a block (work-items within a work-group) can cooperate by sharing data through shared memory (LDS) and by synchronizing their execution to coordinate memory accesses. Only threads within a block (work-items within a work-group) can be synchronized; there is no mechanism for synchronization between blocks (work-groups).
CUDA Term/OpenCL Term: B/WG – Block/Work-group, T/WI – Thread/Work-item
Figure 3. CUDA/OpenCL programming model
The GPU executes threads (work-items) in groups of 32 consecutive-ID parallel threads (work-items) called warps (wavefronts). Work-items (threads) in a wavefront (warp) are executed on the same SIMD engine (SM) to hide memory latencies. The maximum number of threads/work-items in a warp/wavefront depends on the series and model of the GPU. There are also limitations on the grid/NDRange size and the block/work-group size [1, 8].
IV. Testing For testing purposes two kernels were written: one in the CUDA environment and the other in the OpenCL environment. The kernel is a simple vector multiplication with an arbitrary vector size. Testing of the algorithm was done on a desktop computer with an Intel Core 2 Quad Q6600 processor, 2 GB of RAM and Windows 7 Professional. The CUDA GPU used is the NVIDIA GeForce 9800GT and the Stream GPU is the ATI Radeon HD 4650. The 9800GT GPU (CUDA compute capability 1.1) has 14 SMs (eight SPs per streaming multiprocessor), 57.6 GB/s memory bandwidth, 512 MB of global device memory and a PCI-E 2.0 16x bus connection with the host. Threads are divided into warps of size 32, and a block can have a maximum of 512 threads. An SM has 16 kB of shared memory and 8 kB of registers [10]. The ATI Radeon HD 4650 GPU has eight SIMD engines (eight shader units per SIMD engine), 320 stream cores, 16.0 GB/s memory bandwidth, 512 MB of global device memory and a PCI-E 2.0 16x bus connection with the host. Work-items are divided into wavefronts of size 32, and a work-group can have a maximum of 128 work-items. A SIMD engine has 16 kB of LDS and 2 kB of GPRs [11]. A specification comparison of the two GPUs is shown in Table I.
TABLE I. SPECIFICATION COMPARISON OF NVIDIA AND ATI GPUs
Figure 4. Experimental results: a) device global memory allocation time, b) data copying time from the host memory to the device global memory, c) kernel execution time, d) data copying time from the device global memory to the host memory.
V. Experimental results The kernel written in the CUDA environment was tested only on the NVIDIA 9800GT GPU. The kernel written in the OpenCL environment was tested on both GPUs, NVIDIA and ATI. Four time measurements were made: device global memory allocation time, data copying time from the host memory to the device global memory, kernel execution time and data copying time from the device global memory to the host memory. The results of the measurements are the average of 100 executions. Nine input vector sizes were used, N = (512, 16384, 65536, 262144, 1048576, 2097152, 4194304, 8388608, 33554432). The input vectors and the resulting vector are of floating-point data type, and the kernels were executed with a block/work-group size of 256. The time results depicted in Fig. 4 a) to d) show that the NVIDIA GPU with the CUDA environment gives the best results. For both environments the 9800GT GPU has almost the same time results, with a very small difference in favor of CUDA. This can be explained by the fact that CUDA is developed by the same company that develops the GPU on which it executes.
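For illustration, the global-ID indexing that both CUDA and OpenCL use for such an element-wise kernel can be imitated on the host in plain Python. This is a conceptual sketch only, not the authors' kernel source; `simulate_vector_multiply` is a hypothetical name, and the nested loops stand in for the blocks and threads the GPU launches in parallel:

```python
def simulate_vector_multiply(a, b, block_size=256):
    """Simulate the element-wise vector-multiply kernel on the host.

    Each simulated thread/work-item computes one output element,
    identified exactly as on the GPU:
        global_id = block_id * block_size + local_id
    Threads whose global_id falls past the end of the data do nothing,
    just as a real kernel guards against out-of-range IDs.
    """
    n = len(a)
    c = [0.0] * n
    num_blocks = (n + block_size - 1) // block_size  # ceil-divide: grid size
    for block_id in range(num_blocks):          # grid of blocks / work-groups
        for local_id in range(block_size):      # threads / work-items in a block
            global_id = block_id * block_size + local_id
            if global_id < n:                   # bounds guard
                c[global_id] = a[global_id] * b[global_id]
    return c
```

On the GPU the two loops do not exist: every (block_id, local_id) pair runs as its own thread, which is why the launch configuration (grid size, block size of 256 in the experiments above) matters for performance.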
Memory allocation time is very similar for both GPUs, cf. Fig. 4 a). The NVIDIA GPU gives better performance in the data-copying experiment for large amounts of data, cf. Fig. 4 b). This is not noticeable in Fig. 4 d) because only one vector is copied back to the host. Although the ATI GPU has a greater number of SCs than the NVIDIA GPU, its number of shader units is lower than the number of SPs. Because of this fact and other architectural differences (register count, pipeline execution), kernel execution is slower on the HD 4650, Fig. 4 c). VI. Conclusion In this paper we have researched the topic of GPGPU computing. Two specific GPUs (NVIDIA 9800GT and ATI HD 4650) and two different programming environments (CUDA and OpenCL) were compared. We showed the differences and similarities of the two parallel architectures, ATI Stream and NVIDIA CUDA. The programming models for both platforms are almost the same, with differences in specific terms (thread/work-item, block/work-group, grid/NDRange). Two kernels were executed on both GPUs, and four time measurements were made and compared. Because OpenCL is capable of targeting very dissimilar parallel processing devices, it was used for both the NVIDIA and the ATI GPU. The 9800GT GPU with the CUDA environment gave the best result in all experiments. The ATI GPU is slower because of the architectural differences between the two GPUs, which favor the NVIDIA GPU. The OpenCL implementation on the 9800GT GPU gave slower results because OpenCL is a multiplatform environment while CUDA is a hardware-specific platform (developed for NVIDIA GPUs). This paper did not include experiments that use the platform-specific properties (shared memory/LDS, constant cache, texture units). Such experiments will be developed and included in future work.
References
[1.] NVIDIA Corporation, “NVIDIA CUDA C Programming Guide Version 4.2,” NVIDIA Corporation, 2012.
[2.] M. Garland et al., “Parallel Computing Experiences With CUDA,” IEEE Micro, vol. 28, pp. 13–27, August 2008.
[3.] J. Franco, G. Bernabe, J. Fernandez, M. E. Acacio, “A Parallel Implementation of the 2D Wavelet Transform Using CUDA,” 17th Euromicro International Conference on Parallel, Distributed and Network-based Processing, pp. 111–118, February 2009.
[4.] N. K. Govindaraju, B. Lloyd, Y. Dotsenko, B. Smith, J. Manferdelli, “High performance discrete Fourier transforms on graphics processors,” in Proc. of the 2008 ACM/IEEE Conference on Supercomputing, pp. 1–12, November 2008.
[5.] E. Sintorn and U. Assarsson, “Fast parallel GPU-sorting using a hybrid algorithm,” in Journal of Parallel and Distributed Computing, vol. 68, V. Prasanna, Ed. Academic Press, 2008, pp. 1381–1388.
[6.] Khronos OpenCL Working Group, “The OpenCL Specification,” The Khronos Group Inc., 2008.
[7.] T. Matić and Ž. Hocenski, “Parallel Processing with CUDA in Ceramic Tiles Classification,” in Proc. 14th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems KES’10, pp. 300–310, September 2010.
[8.] Advanced Micro Devices, “Reference Guide R700-Family Instruction Set Architecture,” Advanced Micro Devices, Inc., 2011.
[9.] P. Du, R. Weber, P. Luszczek, S. Tomov, G. Peterson, J. Dongarra, “From CUDA to OpenCL: Towards a performance-portable solution for multi-platform GPU programming,” in Parallel Computing, vol. 38, J. Hollingsworth, Ed. Elsevier, 2012, pp. 391–407.
[10.] NVIDIA Corporation, “Specifications NVIDIA GeForce 9800GT,” http://www.geforce.com/hardware/desktop-gpus/geforce-9800gt/specifications, October 2012.
[11.] Advanced Micro Devices, “ATI Radeon HD 4600 Series GPU Specifications,” http://www.amd.com/us/products/desktop/graphics/atiradeon-hd-4000/hd-4600/Pages/ati-radeon-hd4600-specifications.aspx, October 2012.
CORRECTION OF THE CONCENTRATION PROFILE OF EPITAXIALLY GROWN LAYERS MEASURED BY ECV Ürmös Antal, Nemcsics Ákos Institute of Microelectronics and Technology, Óbuda University, Budapest, Hungary e-mail: urmos.antal@phd.uni-obuda.hu, nemcsics.akos@kvk.uni-obuda.hu
Abstract: In this paper we deal with a problem of the concentration-profile measurement of epitaxially grown III-V compound semiconductor layers and with its elimination. Because of their special properties, these materials are of great importance in the fabrication of semiconductor devices, where epitaxial layer growth is a fundamental technological step. During this process the dopant concentration and the composition of the grown layer need to be changed. The concentration profile can be measured, among other techniques, by the electrolytic capacitance-voltage (ECV) technique, which combines the conventional capacitance-voltage (CV) measurement with anodic oxidation. Using this method we can avoid the electrical breakdown that occurs above a certain field strength. The drawback of the method is that the size of the contact grows during dissolution in the course of the measurement. For certain layer structures this area growth causes a measurement error. In this work we outline a solution to this problem with the help of a Matlab™ algorithm developed by us. The program computes the total wall capacitance (the mantle capacitance) at every step and subtracts the wall capacitance from the measured data. The operation of the software is demonstrated on several sample data sets. Keywords: concentration profile correction, ECV measurement, III-V compound semiconductor, anodic oxidation I. Introduction New expectations and requirements placed on semiconductor devices have raised the interest of researchers and engineers in III-V compound semiconductors. The reason is that several of their favorable properties (e.g. direct band structure, higher carrier mobility) make it possible to build devices from them that cannot be made of silicon, or could only be made with worse parameters. One of the most important steps in III-V technology is epitaxial layer growth. In silicon-based device technology, layers with different doping are most often formed by a diffusion process.
This route is not viable for compound semiconductors because of thermal decomposition: during such a process the volatile component evaporates from the semiconductor (arsenic, in the case of GaAs). During epitaxial layer growth it is also possible to change the dopant concentration and the composition of the layer being grown. In this way a pre-designed concentration profile can be created, and even heterojunctions between materials can be produced. Qualification measurements play a very important role in the various semiconductor technologies. These tests allow us, for example, to verify whether the layer with the desired dopant distribution was successfully realized. For this purpose the CV measurement is used most often. The drawback of this method is that after reaching the critical field strength the electrical breakdown of the junction occurs. This effect can be eliminated by the ECV measurement. The essence of the method is that the semiconductor is brought into contact with an electrolyte, which thus operates as a quasi-Schottky diode [1]. Layer removal is achieved by anodic dissolution of the semiconductor. However, a problem arises here as well, namely that the capacitance of the wall of the etched region falsifies the measurement results. In this paper we eliminate this problem with the help of a Matlab™ program developed by us. The algorithm computes the total wall capacitance at every step and subtracts it from the measured data. The operation of the software is demonstrated on sample data sets of 100, 1000 and 10000 measurements. II. The measurement procedure Qualification of the grown epitaxial layer is necessary for technological feedback. One of the most important parameters is the dopant concentration profile. With the measurement we can verify whether the layer with the desired dopant distribution was successfully realized. For this purpose the CV measurement is used most often. The depth distribution of the free carriers influences the capacitance of the junction, which is closely related to the dopant profile. The depth of the measurement is varied with the bias voltage, which is limited by the electrical breakdown of the junction occurring when the critical field strength is reached. Earlier this was circumvented by etching away the measured layer after the measurement and then measuring the capacitance again. With this method the measured results are difficult and inaccurate to fit to each other, and the work is very time-consuming. In 1973 Ambridge and co-workers proposed another method: the semiconductor is brought into contact with an electrolyte, which thus operates as a quasi-Schottky diode. This is the essence of the so-called ECV measurement. Its advantage over the conventional CV method is that the maximum depth is not limited by the breakdown voltage. Our measurements were carried out with a PN4100 Semiconductor Profile Plotter manufactured by Polaron [2]. With this instrument it is possible to record the dopant concentration profiles of III-V compound semiconductors (e.g. GaAs) (Figure 1).
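The measure-then-etch cycle described above can be outlined as a simple loop. This is a sketch only; `measure_cv` and `etch_layer` are hypothetical stand-ins for the instrument actions (C-V measurement and anodic dissolution), not part of the Polaron software:

```python
def ecv_profile(measure_cv, etch_layer, n_steps, d):
    """Sketch of the ECV cycle: measure the C-V characteristic at the
    current surface, then anodically etch a layer of thickness d,
    repeated n_steps times. Returns (depth, capacitance) pairs.

    measure_cv : callable returning the capacitance at the current surface
    etch_layer : callable removing a layer of the given thickness
    """
    profile = []
    depth = 0.0
    for step in range(n_steps):
        c = measure_cv()        # C-V at the current (freshly exposed) surface
        profile.append((depth, c))
        etch_layer(d)           # anodic dissolution removes thickness d
        depth += d
    return profile
```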
Figure 1. Polaron PN 4100 Semiconductor Profile Plotter.
For the measurement a dilute aqueous electrolyte is used, e.g. Tiron or a NaOH:EDTA solution. The measuring cell is shown in Figure 2. The electrolyte contact is formed with a rubber ring of 3 mm diameter. The examination consists of two parts. First the capacitance-voltage characteristic is measured with 30 Hz and 3 kHz AC signals superimposed on the bias voltage. Then a layer of the material is removed by anodic oxidation. The concentration profile is recorded by cyclically repeating these two steps one after the other. The etching is induced by photoexcitation. The photocurrent is proportional to the number of holes in the semiconductor junction. This current starts spontaneously for "p"-type material, while for "n"-type material it starts under illumination.
Figure 2. Image of the measuring cell used for the concentration measurement
It is worth noting here that the larger part of the current flows through the valence band [1]. The carrier concentration at the inner edge of the Schottky junction can be calculated from the capacitance-voltage profile. Here the problem appears that the capacitance of the wall of the etched region is added to the measurement result. III. Discussion The simulation is performed in steps of 0.01 μm; the step size is the thickness of the virtually etched layer. The measured data (C') contain the capacitance of both the base plate (C) and the mantle (Cp). The principle of the correction performed in discrete steps is the following. In step 0 there is no wall capacitance. In the other steps we first compute the thickness of the depleted layer, then we determine the wall-capacitance increment. These wall capacitances add up during the measurements; we call their sum the mantle capacitance. This is subtracted from the currently measured capacitance. Finally the carrier concentration is computed. The finer the resolution (d) of the steps, the more accurate the correction will be. In Figure 3 the corrected capacitance is marked in blue.
The very first step is reading in the data and computing the capacitance distribution from the concentration distribution of the dopant atoms (1), where Cmért(x) is the capacitance distribution of the dopant atoms, N(x) is the distribution of the dopant atoms, q is the charge of one electron, ε0 is the vacuum permittivity (ε0 = 8.9*10-12 F/m), εr is the relative permittivity (εr = 13 for GaAs), V is the bias of the junction, A is the area of the junction, and x is the number of the current step: (1)
As mentioned, the very first compensated value equals the very first measured value, since at that point the wall capacitance is still zero. In the other cases, we first compute the thickness of the depleted layer (w(x)) (2), where Cmért(x) is the capacitance distribution of the dopant atoms, ε0 is the vacuum permittivity, εr is the relative permittivity, A is the area of the junction, and x is the number of the current step: (2)
Figure 3. Interpretation of the measured (C'), mantle (Cp) and corrected (C) capacitance.
The essence of the algorithm is that in every step we compute the wall capacitance, sum it and subtract it from the measured data. The flowchart of the program is shown in Figure 4.
Then we compute the total depth (lteljes(x)) (3), where d is the etched layer and w(x) is the previously computed depleted-layer thickness, and x is the number of the current step: (3)
After that we compute the wall-capacitance increment (4), where Cfal(x) is the wall capacitance, ε0 is the vacuum permittivity, εr is the relative permittivity, A is the area of the junction, and x is the number of the current step: (4)
Then in equation (5) we sum the wall capacitances to obtain the mantle capacitance (Cpalast(x)), where k is the running variable and x is the number of the current step: (5)
Then we subtract the mantle capacitance (Cpalast(x)) from the measured capacitance (Cmért(x)) (6), where x is the number of the current step: (6)
Figure 4. Flowchart of the compensation algorithm.
Finally we compute the compensated dopant distribution (7), where Ckorrekt(x) is the corrected capacitance distribution of the dopant atoms, Nkorrekt(x) is the corrected distribution of the dopant atoms, q is the charge of one electron, ε0 is the vacuum permittivity, εr is the relative permittivity, V is the bias of the junction, A is the area of the junction, and x is the number of the current step: (7)
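The correction loop can be sketched in Python as follows. The paper's implementation is in Matlab™ and its equation images are not reproduced here, so two assumptions are made explicit: the depletion width uses the standard parallel-plate Schottky relation w = ε0·εr·A/C, and the wall-capacitance increment is an assumed plate-capacitor form (contact perimeter times etched step, over the depletion width) for a circular contact. `correct_profile` and all parameter names are illustrative:

```python
import math

# Constants; values quoted from the paper.
EPS0 = 8.9e-12   # vacuum permittivity, F/m
EPS_R = 13.0     # relative permittivity of GaAs

def correct_profile(c_measured, area, d):
    """Subtract the accumulated wall ("mantle") capacitance from each
    measured capacitance, step by step.

    c_measured : measured capacitances C'(x), one value per etch step
    area       : contact area A of the quasi-Schottky junction, m^2
    d          : thickness removed per etch step, m
    """
    perimeter = 2.0 * math.pi * math.sqrt(area / math.pi)  # circular contact (assumed)
    c_mantle = 0.0       # accumulated wall capacitance, Cpalast
    corrected = []
    for x, c_meas in enumerate(c_measured):
        if x == 0:
            corrected.append(c_meas)          # step 0: no wall yet
            continue
        w = EPS0 * EPS_R * area / c_meas      # depletion width, standard form of (2)
        c_wall = EPS0 * EPS_R * (perimeter * d) / w  # assumed increment, cf. (4)
        c_mantle += c_wall                    # running sum, cf. (5)
        corrected.append(c_meas - c_mantle)   # subtraction, cf. (6)
    return corrected
```

The corrected capacitances can then be fed into the usual Schottky C-V formula (7) to obtain the compensated dopant distribution.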
Figures 5, 6 and 7 show the results of the simulation described here on sample data sets of 100, 1000 and 10000 measurements, respectively. The horizontal axis shows the depth on a linear scale, while the vertical axis shows the carrier concentration on a logarithmic scale. For 100 measurement points the result of the run is shown in Figure 5.
Figure 5. Operation of the compensation algorithm for 100 measurement points.
For 1000 measurement points the result of the run is shown in Figure 6.
Figure 6. Operation of the compensation algorithm for 1000 measurement points.
For 10000 measurement points the result of the run is shown in Figure 7.
Figure 7. Operation of the compensation algorithm for 10000 measurement points.
References
[1.] Nemcsics Ákos, „Az epitaxiális GaAs rétegszerkezetek minősítése elektrolitos C-V méréssel,” Elektronikai Technológia, Mikrotechnika, pp. 78–90, 1993.
[2.] Polaron PN4100 Semiconductor Profile Plotter Instruction Manual, 1982.
NOSQL IN WIRELESS SENSOR DATA STORAGE Zoran Balkić, Mirko Köhler, Časlav Livada Department of Automation and Process Computing Faculty of Electrical Engineering, J. J. Strossmayer University of Osijek Osijek, Croatia email: zoran.balkic@etfos.hr, mirko.kohler@etfos.hr, caslav.livada@etfos.hr
Abstract: Wireless sensor networks (WSN) are very popular nowadays as the cornerstone of spatially distributed sensor communication platforms. Building a WSN out of hundreds or even thousands of nodes results in large quantities of data with dynamic schema characteristics that need to be collected, processed and stored. The diversity of data collected from sensors for very specific applications reveals new problems in using generic storage techniques such as relational database management systems. This paper presents an alternative approach to heterogeneous data storage based on the NoSQL movement, which has been introduced lately with horizontal and vertical scalability in mind. A generic web framework for data acquisition and storage that includes NoSQL key/value storage demonstrates the concept of a scalable data storage infrastructure matching the rapid-development requirements of WSN data. Keywords: NoSQL solution, Wireless Sensor Networks, scalability, schemaless data I. Introduction Wireless sensor networks (WSN) have received significant recent attention in both the networking and operating systems communities [1, 2]. These networks are predicated on advances in miniaturization that make it possible to design small form-factor devices with significant on-board computation, wireless communication and sensing capabilities. In parallel with the development of WSN devices, recent work has also begun exploring potential applications of sensor networks for instrumenting and monitoring various environments. Examples of such applications include: monitoring in-building energy usage for planning energy conservation [3]; military and civilian surveillance [4]; fine-grain monitoring of natural habitats with a view to understanding ecosystem dynamics [5]; data gathering in instrumented learning environments for children [6]; and measuring variations in local salinity levels in riparian environments [7].
The variety of these applications clearly conveys the enormous potential impact of wireless sensor networks.
Many WSN characteristics distinguish them from classic wired and wireless networks. In most applications a WSN will operate unattended and untethered. Devices used in a WSN are mostly battery powered, so energy consumption is of high importance during the design phase. Furthermore, in a WSN, devices interact with the physical world and generate data about locally observed events. Those event types dictate a data-centric sensor-network design: the low-level communication primitives in these networks are defined in terms of named data rather than the node identifiers used in traditional networked communication [8]. Sensor networks usually generate large amounts of data collected from the sensors at particular nodes, and all those data must be successfully transmitted and stored for future analysis, visualization and long-term persistence. Conventional relational database systems are widely used for WSN data persistence. A cloud computing strategy can help system developers and users to use available resources efficiently and to speed up communication and data transfer. WSN consist of hundreds or
even thousands of nodes that do not necessarily monitor the same type of data (humidity, temperature, pressure and so on). The architecture in Figure 1 consists of WSNs, the cloud infrastructure and the clients. Clients seek services from the cloud system. The WSN consists of physical wireless sensor nodes that sense for different applications such as weather stations, transport and environmental monitoring. Each sensor is programmed with the required application logic. Those heterogeneous data types introduce extra groundwork in the field of online storage systems based on Relational Database Management System (RDBMS) data models. Often, a WSN transmits only a few of the whole range of data types it is designed for. Sometimes, due to sensor failure or a lack of consistent data, some of the expected data packets will not be transmitted, and thus not stored.
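This schema problem can be made concrete with two hypothetical readings (field names and values invented for illustration): a fixed relational table needs the union of all fields ever transmitted, padded with NULLs, while a schema-less store keeps each record exactly as received.

```python
# Two readings from heterogeneous nodes: the set of fields differs per
# node and per transmission (node 42 carries no temperature sensor).
reading_a = {"node": 17, "ts": 1351555200, "temperature": 21.5, "humidity": 48.0}
reading_b = {"node": 42, "ts": 1351555205, "pressure": 1013.2}

def columns_needed(readings):
    """A relational table must declare the union of all fields seen so
    far, with NULLs for every missing one; a key-value/document store
    simply persists each record as-is."""
    cols = set()
    for r in readings:
        cols.update(r)
    return sorted(cols)
```

Every new sensor type forces a schema migration in the relational model, whereas the schema-less records above remain valid unchanged.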
Figure 1. WSN to Cloud Computing data flow
II. NoSQL overview
The Not only SQL (NoSQL) movement introduces a new data storage paradigm which allows an elastic, dynamic data model that scales and does not impose strict rules at design time. Another argument is that RDBMSs do not scale well, and in the future, with numerous WSN networks transmitting sensing data, systems will have to deal with huge amounts of schema-less data. Traditional relational database management systems are designed with these defining features:
• Table based
• Relations between distinct table entities and rows
• Referential integrity
• ACID (atomicity, consistency, isolation, durability) transactions
• Arbitrary queries and joins
There are very good reasons for choosing an RDBMS as long as the amount of data is not prohibitive. There are, however, equally good reasons not to do so and to choose one of the following solution types, shown in Figure 2:
• Distributed key-value stores
• Distributed column-family stores
• (Distributed) document databases
• Graph databases
Figure 2. NoSQL Ecosystem
Figure 3. Entities distribution model with NoSQL and RDBMS
NoSQL solutions lack some of the features that define an RDBMS solution. They do so for the sake of scalability. That does not, however, mean that they are mere datastores; document, column-family and graph databases are far from unstructured and simple.
III. Data models
A data model defines how an application stores data and makes associations. As inherited from RDBMS systems, an incorrect data model forced during data design will later probably create a lot of problems for developers, especially in the field of WSN application design and deployment. In the traditional relational model, limitations are imposed by normalization: as the data grows, the complexity of the database grows with it. Furthermore, write operations (Figure 3) to the enlarging database become a bottleneck, which in turn results in poor horizontal scaling of the system. Noticing those limitations, the NoSQL movement came up with practical solutions for different situations. The advantages of NoSQL solutions are the following:
• Flexible (schema-less)
• Scalability out of the box
• New data models
NoSQL drawbacks arise from the fact that there is no common standard and that these systems are relatively immature compared with the well-known RDBMS solutions.
1) Key-value stores
The main advantage of key-value NoSQL data stores is the ability to access a value based on a unique key. This principle is found in many programming languages (Java – hash table, PHP – associative array). For this reason key-value stores are blazingly fast, as they often support in-memory operations. One of the best-known implementations of a key-value store is Redis [9], which offers incredibly fast performance, a command line interface (CLI) and an out-of-the-box publish/subscribe mechanism which allows simple, yet powerful, communication solutions. Besides those strengths, Redis can be limiting for very complex use cases.
2) Column stores
In many cases a column store behaves similarly to a relational database, but turns the well-known table model around into column stores which expose data as a stream of data. An advantage of this approach is built-in multidimensional data storage. A Java implementation of a column store called LucidDB [10] offers a wide range of attractive features such as page-level multi-level versioning, labeling, intelligent prefetch etc.
3) Document databases
Document databases provide access to structured data without a predefined schema. A self-contained object contains buckets of key-value pairs. One of the most important representatives of document databases is MongoDB [11], which introduces a familiar query language and a vast amount of developer toolkits. Another document database which offers attractive features is CouchDB [12], which supports bilateral (master-master) replication and a RESTful JSON API.
4) Graph databases
Graph databases use a flexible graph model that contains nodes with properties and relationships, instead of tables, rows and columns. The strength of graph databases is that they are fully transactional with a flexible API, which comes at the price of complexity; working with them imposes a completely different way of thinking. They are mostly used in applications that require social relations, network topologies etc. Neo4J [13] is one of the representatives; it stores data in the nodes and relationships of a graph. As the most generic of data structures, a graph elegantly represents any kind of data, preserving the natural structure of the domain. IV. NoSQL as WSN application-driven schema Taking into consideration the nature of the WSN networking model, NoSQL arises as a logical counterpart for data persistence. Thousands of cheap commodity hardware machines providing speed and redundancy are a major part of the NoSQL idea. An inverted search index, which is necessary in many cases, performs badly on commonly available hardware. The Cloud computing paradigm, particularly Software-as-a-Service (SaaS), makes traditional SQL-like data stores look dated.
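The two Redis features singled out for WSN use, keyed access and a built-in publish/subscribe mechanism, can be sketched with a toy in-memory store. This is a conceptual illustration only, not the Redis API; `MiniKV` and the channel naming are invented:

```python
class MiniKV:
    """Toy in-memory key-value store illustrating keyed access plus
    publish/subscribe, the two mechanisms described above for Redis."""

    def __init__(self):
        self._data = {}
        self._subscribers = {}   # channel -> list of callbacks

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)   # None for a missing key

    def subscribe(self, channel, callback):
        self._subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        # Deliver to every subscriber of the channel; return receiver count.
        callbacks = self._subscribers.get(channel, [])
        for cb in callbacks:
            cb(message)
        return len(callbacks)

# A sensor node stores its latest reading and notifies listeners,
# e.g. a responsive user interface waiting on the channel:
store = MiniKV()
received = []
store.subscribe("node-42/temperature", received.append)
store.set("node-42/temperature", 21.5)
store.publish("node-42/temperature", 21.5)
```

In a real deployment the store runs as a shared server process, so many sensor gateways and UI clients share the same keyspace and channels.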
Figure 4. Size versus complexity
Investigating real-life WSN applications, NoSQL key-value data stores emerge as the ideal solution for organic, dynamic-schema data storage with regard to size versus complexity, as depicted in Figure 4. A key-value database such as Redis offers different key types: strings, hashes, lists and sets, and supports transactions. A publish/subscribe protocol for instant messaging is built into the system, providing an easy mechanism for the development of responsive user interfaces.
V. Conclusion
Communication among sensor nodes using the Internet is a challenging task, since sensor nodes have limited bandwidth, limited memory and low-power batteries. The issues of storage capacity may be overcome by the widely used cloud computing technique. In this paper, we have discussed some issues of cloud computing and sensor networks. Future work should consider a detailed analysis and comparison of all NoSQL data stores versus RDBMSs in the field of real-life WSN application data storage.
References
[1.] J. Hill, R. Szewcyk, A. Woo, S. Hollar, D. Culler, and K. Pister, “System Architecture Directions for Networked Sensors,” in Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems, 2000.
[2.] C. Intanagonwiwat, R. Govindan, and D. Estrin, “Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks,” in Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (Mobicom 2000), 2000.
[3.] Brainy Buildings Conserve Energy. Center for Information Technology Research in the Interest of Society. www.citris.berkeley.edu/SmartEnergy/brainy.html
[4.] D. Estrin, R. Govindan, J. Heidemann, and S. Kumar, “Scalable Coordination in Sensor Networks,” in Proc. of ACM/IEEE Mobicom, 1999.
[5.] A. Cerpa, J. Elson, D. Estrin, L. Girod, M. Hamilton, and J. Zhao, “Habitat Monitoring: An Application-Driver for Wireless Communication Technology,” in Proceedings of the First ACM SIGCOMM Latin America Workshop, 2001.
[6.] M. Srivastava, R. Muntz, and M. Potkonjak, “Smart Kindergarten: Sensor-Based Wireless Networks for Smart Developmental Problem-Solving Environments,” in Proceedings of the Seventh Annual ACM/IEEE International Conference on Mobile Computing and Networking (Mobicom 2001), 2001.
[7.] D. Steere, A. Baptista, D. McNamee, C. Pu, and J. Walpole, “Research Challenges in Environmental Observation and Forecasting Systems,” in Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (Mobicom 2000), 2000.
[8.] C. Intanagonwiwat, R. Govindan, and D. Estrin, “Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks,” in Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (Mobicom 2000), 2000.
[9.] Redis datastore; www.redis.io
[10.] LucidDB; www.luciddb.org
[11.] MongoDB; www.mongodb.org
[12.] CouchDB; couchdb.apache.org
[13.] Neo4J; neo4j.org
VISUALIZATION OF DYNAMIC SPATIOTEMPORAL DATA Zoran Balkić, Tomislav Keser, Goran Martinović Department of Automation and Process Computing Faculty of Electrical Engineering, J. J. Strossmayer University of Osijek Osijek, Croatia email: zoran.balkic@etfos.hr, tomislav.keser@etfos.hr, goran.martinovic@etfos.hr
Abstract: Visualization of dynamically changing data is one of the most important tasks in today’s information systems. Due to the increased capacity to record vast amounts of data provided by modern technologies (GPS and remote sensing), the number and size of dynamic spatiotemporal data sets are increasing rapidly. Using modern software and hardware systems, this paper presents a simple but powerful method of data acquisition, storage and visualization based on the spatiotemporal nature of all data. With the rapid development of the Internet in recent years, web services are among the most powerful tools to collect and disseminate data, specifically data with a spatiotemporal component. The method is not limited to real-world objects but can be applied to virtual environments as well, which are beyond the capacity of human senses and our perception of the real world. Keywords: spatiotemporal data, interactive visualization, GPS, embedded systems, web services I. Introduction Data visualization techniques play a key role in the process of analysis, decision making and understanding of large quantities of collected data. The world we live in is not static, and changes occur in all aspects over time. Understanding dynamics in order to discover spatiotemporal patterns, trends and relationships is an important step for many global, regional and local phenomena [1]. Trajectory data of moving objects, as dynamic spatiotemporal data prototypes [2], such as cars, humans and hurricanes, are recently gaining attention because of inexpensive methods of data collection, storage and dissemination. These data play a crucial role in decision-making processes in all application domains.
Visualization techniques are an important part of the complete system for display and data analysis. Among the many available commercial and open source visualization solutions, only a few satisfy the following requirements:
• Spatiotemporal data filtering and visualization (map view, temporal view, attribute view)
• Client side Application Programming Interface (API)
• Modern Internet browser support
• Standard data format support
Data visualization that satisfies these requirements can be separated into several categories: JavaScript graphical user interfaces, dedicated Adobe Flash/Flex solutions and specialized spatiotemporal viewers (Google Earth). Important characteristics of the collected spatiotemporal data should be considered in terms of a generic question triangle for data analysis (Figure 1).
Figure 1. Question triangle in spatiotemporal data exploration
To get proper answers to the stated questions, the user needs a Graphical User Interface (GUI) which supports partial data filtering and visualization. To answer the "where" question a map view is essential, for the "what" question the user needs an attribute view, and for the "when" question a temporal view is needed. As shown in Figure 1, all these questions are related to each other. On the basis of these three root questions there are many more questions related to objects in the spatiotemporal domain; some of them can be found in [3]:
• What objects are present in location p at time t?
• At what moment (if any) was object o at location p?
• How did the location of object o change from time t1 to t2?
• What were the relative positions of objects o1 and o2 at time t?
More specific and practical questions are mentioned in [4]:
• What is the trajectory of cars, whales, etc.?
• Which routes are regularly used by trucks?
• At what speed does an airplane move? What are its top and average speeds?
II. Related work Different research has been conducted in the field of visualization techniques, including various client side software solutions. The interactive visualization framework proposed in [5], built around Topmodel and web map services, combines Java with the Web Map Service (WMS); however, the proposed framework works only with static movements. Another framework [6] proposes visualization by applying Extensible Stylesheet Language Transformations (XSLT) to the Geography Markup Language (GML), generating 2D Scalable Vector Graphics (SVG) and combining it with eXtensible 3D (X3D) graphical representations. This solution is useful
for the exploration and analysis of large volumes of spatiotemporal data, but it lacks interaction with dynamic movements over a desired time period. A different approach introduced in [7] presents an implementation using the Google Maps API for Flash/Flex, which enables simple interaction such as zooming in and out but does not provide the functionality needed for dealing with time-dependent data series. Iceberg movement data were explored in [8] using the Open Geospatial Consortium (OGC) standard for spatiotemporal data storage combined with a Scalable Vector Graphics (SVG) model for visual exploration. This last approach is limited when visualizing a large number of objects over a long time period, and other important characteristics, such as distances and durations, were not considered. III. Visualization techniques Available visualization techniques are mostly realized as specialized client software components, both as standalone applications and as pluggable web solutions. The main differences between these two approaches appear in several areas: openness (open source versus commercial), standalone applications versus web API client solutions, and performance in the sense of speed and reliability. This paper focuses on web visualization libraries, or libraries that can be used in a web context. Based on the stated requirements, seven visualization libraries were chosen to be investigated for the purpose of visualizing dynamic spatiotemporal data: ArcGIS JavaScript API, Processing, OpenLayers, Google Maps API, SIMILE, Timemap and the Google Earth web plugin. The ArcGIS JavaScript API [9] offers embedding of maps and tasks from an ArcGIS server into custom web applications. It provides functions to display data on an interactive map in different projections.
It also allows users to draw graphic elements and provides pop-up windows, as well as animation functions such as start and pause, but without control over the animation speed. Processing [10] is a standalone open source solution whose 2D drawing context has been ported to JavaScript for web based applications. Processing allows users to display data on the map view, supporting web animation, images and interaction without using Flash or Java applets. Animation functions are available, but zooming, panning and pop-up windows are missing. OpenLayers is an open source JavaScript library that allows users to display map data in web browsers without server side dependencies. Pure client side visualization techniques are useful in many heterogeneous environments. Geographic applications benefit from OpenLayers' support for standardized geographic data access such as the OpenGIS Web Map Service (WMS) and Web Feature Service (WFS) standards. OpenLayers provides many capabilities, including visualizing data on a map view, interactive functions (zooming, panning), filtering/querying, displaying geometries (points, lines and polygons), animations, data clustering etc. The only requirement it does not support is timeline functions, which are essential to answer the "when" question by exposing a temporal view to the end users. The Google Maps API [11] is a widely used JavaScript web library that allows embedding of Google maps into custom web pages. It enables users to interact with several types of base layers: map, satellite, hybrid, traffic etc. The capability of showing geometries (points, lines and polygons) which can be styled according to user preferences makes the Google Maps API one of the best solutions for geo visualization. Overlays shown on top of the map view support different spatiotemporal data formats such as Keyhole Markup Language (KML) and GeoRSS. Both of these data formats support spatial and temporal data attributes, which makes them suitable for spatiotemporal data transfer and visualization. Nevertheless, the Google Maps API has limitations in its animation and timeline functionality. SIMILE [12] is an abbreviation for Semantic Interoperability of Metadata and Information in unLike Environments.
SIMILE is a JavaScript library that aims at creating robust, open source tools that enable users to access, manage, visualize and reuse digital assets. SIMILE is supported by Asynchronous JavaScript and XML (AJAX). AJAX provides the capacity to get data from the server asynchronously, in a non-blocking way, without disturbing the display of the existing page. SIMILE allows users to do filtering, brushing and searching using Exhibit facets. However, SIMILE does not support displaying line objects or drawing lines, and it does not provide an animation function.
Figure 2. Timemap visualization user interface
Timemap [13] is an open source JavaScript library which allows users to combine a Google map with a SIMILE timeline (Figure 2). As a result, it provides functions from both the Google Maps API and the SIMILE timeline. Timemap allows users to display geometries on the map and the timeline simultaneously, with the timeline view linked to the map view. The library can load multiple datasets in JavaScript Object Notation (JSON), KML or GeoRSS format. The filtering function provided by Timemap is very simple: it does not allow users to filter several objects at the same time. The timeline can be considered a linear temporal legend. Google Earth [14] and its web plugin is a standalone free and commercial visualization solution which proves to be the most complete and stable software component. The web plugin allows an installed Google Earth to be embedded as an object in targeted web pages. It supports rendering of the KML format, whose time attributes are essential for spatiotemporal data. Its timeline view (Figure 3), with start and stop functions as well as timeline playback speed control, makes visualization of data in the space and time domain intuitive and easy to use.
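A minimal KML fragment carrying the time attributes these viewers consume might look like the following (the coordinates and timestamp are made-up illustrations, not data from the prototype; element names follow the KML 2.2 schema, which expects coordinates in lon,lat,alt order):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <!-- the <TimeStamp> is what the Google Earth time slider filters on -->
      <TimeStamp><when>2012-10-29T10:15:00Z</when></TimeStamp>
      <Point><coordinates>18.228200,45.555000,95.0</coordinates></Point>
    </Placemark>
  </Document>
</kml>
```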
Figure 3. Google Earth time slider filter view
IV. Building the prototype This chapter describes the process of prototype implementation. The main objective of this prototype is to provide end users with an overview as well as a detailed view of the collected data in one interactive map. Data were collected from a moving object with an embedded system mounted on top of it. Figure 4 depicts the main building blocks of this prototype: the embedded device, an Android mediator, network infrastructure (wireless network), a Java server and the end user's client computer. Data are collected by the embedded device mounted on a vehicle in motion (GPS, accelerometer and gyroscope readings), then transported via Bluetooth to the Android mediator, which performs instant visualization (for control purposes), and finally transmitted over HTTP to a custom built Java Enterprise Edition server for storage. The final step in the process is the generation of visualization files for data presentation and analysis.
Figure 4. Prototype system overview
Communication between the two main system components (embedded device and Android mediator) is shown in Figure 5. The embedded system is programmed in such a way that, using the standard National Marine Electronics Association (NMEA) format for the GPS and time attributes, additional data (accelerometer and gyroscope readings in all three axes) are appended to the NMEA sentences as a standard text suffix.
The NMEA standard [15] uses a simple ASCII serial communication protocol that defines how data is transmitted in sentence form from a GPS enabled device to a listener (in our prototype, the Android mediator).
Figure 5. Prototype system communication overview
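The NMEA sentence parsing described above can be sketched in Java; this is a minimal sketch of a $GPRMC parser (the class and method names are our own, not the paper's actual parser library, and the custom accelerometer/gyroscope suffix is not handled here):

```java
import java.util.Locale;

// Minimal NMEA $GPRMC parser sketch; field layout follows NMEA 0183.
public class NmeaParser {
    public static class Fix {
        public final double lat, lon, speedKnots;
        Fix(double lat, double lon, double speedKnots) {
            this.lat = lat; this.lon = lon; this.speedKnots = speedKnots;
        }
    }

    // Converts NMEA "ddmm.mmmm" / "dddmm.mmmm" to decimal degrees.
    static double toDegrees(String v, String hemi) {
        int cut = v.indexOf('.') - 2;   // minutes always take 2 digits before '.'
        double deg = Double.parseDouble(v.substring(0, cut))
                   + Double.parseDouble(v.substring(cut)) / 60.0;
        return ("S".equals(hemi) || "W".equals(hemi)) ? -deg : deg;
    }

    public static Fix parseRmc(String sentence) {
        String body = sentence.substring(1);            // drop leading '$'
        int star = body.indexOf('*');
        if (star >= 0) body = body.substring(0, star);  // drop checksum part
        String[] f = body.split(",");
        if (!f[0].endsWith("RMC") || !"A".equals(f[2]))
            throw new IllegalArgumentException("not a valid RMC fix");
        return new Fix(toDegrees(f[3], f[4]), toDegrees(f[5], f[6]),
                       Double.parseDouble(f[7]));
    }

    public static void main(String[] args) {
        // A commonly cited sample RMC sentence.
        Fix fix = parseRmc(
            "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A");
        System.out.printf(Locale.US, "lat=%.4f lon=%.4f speed=%.1f kn%n",
                          fix.lat, fix.lon, fix.speedKnots);
    }
}
```

A production parser would also verify the `*hh` checksum and dispatch on the sentence type before splitting fields.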
A standard Java web application was developed using the Eclipse development framework and deployed onto a Tomcat JEE web server, which uses a MySQL database for data storage. Since the raw data from the embedded system are in standard NMEA format, specialized parser libraries have been developed for data normalization: on the Android device for a quick map overview of the data, and as standalone Java libraries for use in the server environment. Although the data are collected in real time and transferred to the online system, the Java parser libraries are available for offline use as well. The web application's functional requirements can be stated as follows: receiving incoming data, checking data consistency, and parsing the needed information (time, longitude, latitude, altitude, velocity, and accelerometer and gyroscope data in all three axes of the reference system). The relational database MySQL was chosen for data storage as one of the most widely used in standard server environments. The database table with the normalized and parsed data readings is shown in Figure 6. MySQL supports spatial extensions following the specification of the Open Geospatial Consortium (OGC), shown in the column "GEO" as "GeoPoint", which allows direct filtering of the stored spatial data.
Figure 6. MySQL database table for data storage
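A hypothetical sketch of the statement the server might issue to populate such a table, using the OGC well-known-text form that MySQL's spatial extension accepts for the "GEO" column (the table and column names here are assumptions read off Figure 6, not the authors' actual schema):

```java
import java.util.Locale;

// Builds an INSERT for a normalized readings table such as the one in Figure 6.
// GeomFromText('POINT(lon lat)') is the MySQL/OGC way to fill a spatial column;
// all identifiers are illustrative assumptions.
public class ReadingSql {
    public static String insert(String time, double lon, double lat,
                                double alt, double velocity) {
        return String.format(Locale.US,
            "INSERT INTO reading (time, geo, altitude, velocity) "
          + "VALUES ('%s', GeomFromText('POINT(%.6f %.6f)'), %.2f, %.2f)",
            time, lon, lat, alt, velocity);
    }

    public static void main(String[] args) {
        System.out.println(insert("2012-10-29 10:15:00", 18.2282, 45.555, 95.0, 12.5));
    }
}
```

In real code the values would of course go through a `PreparedStatement` rather than string formatting; the sketch only shows the spatial-column syntax.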
V. Visualization prototype The visualization user interface is built on top of widely used web standards such as Hypertext Markup Language (HTML) and JavaScript, related frameworks (Google Charts API) and the Google Earth web browser plugin. A web application written in Java, Java Server Pages (JSP) and Servlets is the cornerstone of the system for data retrieval, parsing of NMEA sentences, storage, preprocessing and visualization. The complete web application is deployed to a Tomcat web container running on the Debian Linux platform. As stated in chapter III, KML was chosen as the data format because it satisfies all requirements regarding the spatial (coordinates, geometries) and temporal (time) components. KML is an XML based standard, thus human readable, and can also be served in compressed, zipped form. Compressed KML files are smaller in size and more convenient for transport over Internet connections. Specialized KML generators have been developed for visualization purposes; the generated files can also be stored for offline viewing and archival.
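The KML generation step can be sketched as follows: one `<Placemark>` with a `<TimeStamp>` per GPS fix, wrapped in a KML 2.2 document (class and method names are ours; the authors' actual generator code is not published):

```java
import java.util.Locale;

// Sketch of a per-fix KML generator; the <TimeStamp> element is what the
// Google Earth time slider filters on.
public class KmlWriter {
    // KML expects coordinates as lon,lat,alt (in that order).
    public static String placemark(String whenIso, double lon, double lat, double alt) {
        return String.format(Locale.US,
            "<Placemark><TimeStamp><when>%s</when></TimeStamp>"
          + "<Point><coordinates>%.6f,%.6f,%.1f</coordinates></Point></Placemark>",
            whenIso, lon, lat, alt);
    }

    public static String document(String placemarks) {
        return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
             + "<kml xmlns=\"http://www.opengis.net/kml/2.2\"><Document>"
             + placemarks + "</Document></kml>";
    }

    public static void main(String[] args) {
        System.out.println(document(
            placemark("2012-10-29T10:15:00Z", 18.2282, 45.555, 95.0)));
    }
}
```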
Figure 7. Google Earth web plugin view of the collected data
Figure 7 shows the generated view in the Google Earth web plugin, presenting the visualization of the created KML file. The timeline view shown in the top left corner enables user interaction and filtering based on the desired timespan of the collected data. Standard options such as zoom in, zoom out, panning and data selection are also available. Other data attributes, like object speed and acceleration, are better displayed in a different perspective such as a timeline. For these purposes a Google Chart API view was created and displayed separately, as shown in Figures 8 and 9.
Figure 8. Visualization of the measured speed of the object in motion
Figure 9. Accelerometer data visualization
VI. Conclusion and future work The implementation process for visualizing dynamic spatiotemporal data has been presented in this work. By nature all data have a spatiotemporal component, even though it is not always necessary to store, process and analyze it. All dynamic data consist of a set of attributes, in our case acceleration, gyroscope readings, speed and altitude, and the spatiotemporal context domain is used to provide an additional, more natural view of the collected data. All generated visualization views can be accessed using standard web browsers. The presented visualization techniques are purely client oriented, thus improving the data load speed over slower Internet connections. This paper also emphasizes the importance of preprocessing the data and has provided a solution to the problem of transforming raw data into a relational database model. The presented visualization techniques provide solid ground for future work in the area of sports based motion trajectory data for players and other objects. The examples show that a "picture is worth a thousand words": a single summary visualization can reveal a remarkable amount of information about the performance, style and hidden data of objects in motion. Future research needs to address motion tracking and visualization in more complex use cases involving multiple objects and their mutual interaction. The relational database system should in the future be replaced by a more robust and modern database system based on NoSQL solutions such as Redis, MongoDB or similar.
References
[1.] C. A. Blok; Dynamic visualization variables in animation to support monitoring of spatial phenomena; PhD thesis; Utrecht/Enschede, Universiteit Utrecht; 2005; pp. 1-190. [2.] V. Bogorny, L. O. Alvares; Semantic Trajectory Data Mining: a User Driven Approach; Geographic Privacy-Aware Knowledge Discovery and Delivery, Schloss Dagstuhl – Leibniz-Zentrum fuer Informatik, Germany, Issue: 08471 (ISSN: 1862-4405); 2009 [3.] N. Andrienko, G. Andrienko, N. Pelekis, S. Spaccapietra; Basic Concepts of Movement Data; in: Mobility, Data Mining and Privacy; Springer Berlin Heidelberg; 2008 [4.] R. H. Guting, M. Schneider; Moving Objects Databases; Amsterdam etc., Elsevier Morgan Kaufmann; 2005 [5.] B. Huang; Web-based dynamic and interactive environmental visualization; Computers, Environment and Urban Systems, 27, 623-636; 2003
[6.] J. Ying, D. Gracanin, C.-T. Lu; Web visualization of geo-spatial data using SVG and VRML/X3D; Third International Conference on Image and Graphics, 2004 [7.] Visualizing graphic data set | insideRIA; Available from: www.insideria.com; [10/2012] [8.] T. Becker, B. Köbben, C. A. Blok; Timemapper: visualizing moving object data using WMS time and SVG SMIL interactive animations; In: Proceedings SVGOpen, 7th international conference on scalable vector graphics; 2009 [9.] ArcGIS Javascript API; resources.arcgis.com/content/arcgis-api-javascript; [10/2012] [10.] Processing project; processingjs.org/; [10/2012] [11.] Google Maps API; code.google.com/apis/maps/; [10/2012] [12.] SIMILE; code.google.com/p/simile-widgets/; [10/2012]
[13.] Timemap; code.google.com/p/timemap/; [10/2012]
[14.] Google Earth; earth.google.com; [10/2012] [15.] National Marine Electronics Association NMEA; www.nmea.org; [01/2012]
NEW SOLUTIONS IN INDUSTRIAL PROCESS VISUALIZATION AND CONTROLLING Zoltan Zidarics Department of Automation of Pollack Mihaly faculty of Engineering and Informatics of University of Pecs Pecs, Hungary email: zamek42@gmail.com
Abstract: Technical evolution is forcing process controlling and visualization systems to change: more workstations and heterogeneous operating systems must be served. We cannot increase the network traffic, and we cannot keep using old style bitmap based software. A solution to the problem is GWT and SVG. Keywords: Process controlling, visualization, GWT, SVG I. The task During the past few years mobile electronic devices have managed to find and stabilize their place in the global electronics market. Due to their relatively high performance, decreasing production costs and pronounced impact on user behavior, these computers force professionals to rethink industrial visualization and controlling systems. These mobile and handheld machines have practically the same computing performance and screen resolution as a desktop PC, while they run different operating systems with a different approach to user interface and general usability (e.g. the GUI has no traditional "windows" structure). While users, especially managers and senior managers of companies, demand the ability of real-time remote monitoring, legacy visualization software systems are unable to support these needs for the following reasons: • Such visualization software is designed for a specific operating system, so all other instances (e.g. client software) need the same OS. • It is designed for a definite screen size; its graphical components are bitmaps, which cannot be resized online. • It is hardly scalable. If we increase the number of workstations, all workstations collect data from the same industrial network,
which greatly increases the traffic of the network, especially because the graphical components are also produced by the remote servers. On an RS232/RS485 serial network, for example, this causes a traffic jam, but it also demands extra resources from the network elements on TCP/IP. • External client requests are potential security risks. Based on the disadvantages above, we need to redefine industrial controlling and visualization software. II. Prerequisites Before we begin programming, we need to state our prerequisites in order not to let our software become obsolete too early. The main targets are the following:
• Client-server architecture
• Thin client technology
• Server side on a micro-controller architecture, if possible
• Ability to serve multiple clients at the same time, with no explicit limitation on the number of clients if possible
• Platform independence: no need to recompile for another operating system
• No fixed screen size: the program has to resize itself to the current screen size
• Security: after the Stuxnet/Duqu viruses, this is a very important point of view
• Easy adaptation of an instance to the current configuration: no modification of the source files should be needed
• Use of open-source software
III. The server Corresponding to the prerequisites above, the server side uses the Spring framework, an open source Java based framework that needs fewer resources than other J2EE frameworks. Thanks to this property it can run in an embedded Linux environment, and since Spring has a modular architecture, only the necessary modules need to be loaded; the size of the deployed web archive file is only about 32 MB. Using this method we can build a PLC module for this system. Another key point is security. Spring is very flexible, it is compatible with many industrial authentication sources, and it is a very intensively developed software framework in which mistakes are revealed quickly and maintenance is very frequent. The server communicates with the processes of the industrial network both as a data collector and as a controller. It can log events to an SQL or NoSQL database, and can serve as an alerting source to users via SMS or email. Of course, if it is used on an embedded system, it might be practical to deploy the database on higher performance hardware. The technology is defined by an XML file which contains the type and properties of each device. Apart from this, only the SVG drawing of the technology needs to be prepared. With this method we can flexibly adapt the software to the technology while preserving verifiability. IV. The client On the client side we can use GWT (Google Web Toolkit). This framework was developed by Google, which ensures high software quality and maintenance. Beginning with version 2.5
the main direction of development is controlled by the software community instead of Google. Google strongly supports the security of GWT, and users can increase their software security level based on it. The GUI of the technology and the controlling systems is displayed on the client side. Since the client side might run on various operating systems, sensitive data are not recorded on this side. In a high-security environment we can prevent the client side from storing user passwords, so unauthorized access is prevented even when the client hardware is lost. There is another important benefit of using GWT: the visual display uses the resources of the client side hardware, while server and client exchange only raw data. This means that server side hardware extensions are not needed to satisfy the demands of an increasing number of clients. V. Visualization The displayed graphics need to be attractive and informative. The prerequisites require the client to resize itself online to the screen size, which excludes the use of bitmaps. Fortunately we can use SVG, a vector graphics standard of the W3C supported by most browsers. The standard was established in 2001 with version number 1.0, and version 1.1 arrived in 2003 and has been valid since then. The standard offers excellent and very sophisticated drawing capabilities (e.g. animation is also supported). With various modifications and by solving an auto-resizing problem, it is now possible to use SVG formatted pictures in the GWT framework. Unfortunately there are browsers which do not fully support SVG. The following table summarizes the abilities of each browser:
Browser                                  SVG support  Fault
Firefox > 11.0 (Windows/Linux/Android)   100%         -
Opera > 11 (Windows/Linux)               100%         -
Chrome > 21 (Windows/Linux/Android)      100%         -
Chrome (iOS/OSX)                         90%          No animation
Safari (iOS/OSX)                         90%          No animation
Internet Explorer > 8                    70%          No animation; extremely slow; mysterious errors because of poor implementation
As we can see, we can serve most of the client devices. Unfortunately, Apple's OSX and iOS do not support the animation. On Windows there are three dominant competing browsers, so the poor quality of Internet Explorer is easy to work around by a simple browser change. The drawings were made with an open source multi-platform graphical tool named Inkscape. Each module has a corresponding SVG file, so modification is very easy; we only need to run a script on each file to convert it to the correct format. This script also copies the files to the appropriate directory. SVG also allows any graphical element to generate events, so it is possible to create complex mechanisms in the client software easily. VI. Testing I made an application to test the concept in a worst-case environment. The prerequisites instruct us to build it on a micro-controller platform, so I used a BeagleBoard XM Rev-C, an embedded Linux board. The main properties of the test setup are:
• Texas Instruments Cortex A8 1 GHz processor,
• 512 MB RAM,
• 4 GB class 4(!) SD card,
• Linux 3.2.13-x7 kernel,
• Debian Wheezy operating system,
• Java OpenJDK 7 runtime environment,
• Jetty 8 web server,
• PostgreSQL 9 database server on another machine.
This configuration is a low budget device at this time. If it works well, at satisfying speed and with long uptime, we can consider our idea functional. The test started on 28.10.2012 and has continuously served at least one client since then. The server can be reached at the following address: http://zamek.pmmf.hu:8080/gwt/Argus.html
VII. Prices and licenses
Component             License       Price
Google GWT            Apache v2.0   $0
Gwt-svg lib           LGPL          $0
Spring framework      Apache v2.0   $0
OpenJDK               GPL           $0
Linux                 GPL           $0
Debian                GPL           $0
PostgreSQL            BSD           $0
BeagleBoard XM Rev-C  -             ~$125.00
VIII. Who should be interested? PLC manufacturers: the low budget hardware allows it to be manufactured as a PLC module. With this method manufacturers can give a high quality, low cost device to their customers. Visualization software manufacturers: thanks to the software components, manufacturers can develop highly configurable systems, and with the applied components software manufacturing can be automated. The system itself is open source and platform independent. Automotive industry: users of on-board computer systems require heterogeneous operating systems for everyday usage and for diagnostic purposes. The main point, given the large number of devices, is cheaper and more economical manufacturing: there is no need to use expensive embedded systems if the resource demands of the visualization remain on the client side. EIB/KNX manufacturers: EIB/KNX manufacturers face the same problems as above while serving more heterogeneous and simultaneous client requests. IX. Conclusion Based on our prerequisites, we managed to reach most of our goals using this concept. The remaining incompleteness comes from the different behavior of each browser; we can trust manufacturers to adopt the standard completely in the future. The attempt can serve as evidence that the defined targets are achievable.
SIP 2012 30th INTERNATIONAL CONFERENCE SCIENCE IN PRACTICE PÉCS, HUNGARY, OCTOBER 29-30, 2012
ISBN 978-963-7298-53-0