【機(jī)械類畢業(yè)論文中英文對(duì)照文獻(xiàn)翻譯】智能機(jī)器人
Intelligent Robots
Industrial robots have been developed to perform a wide variety of manufacturing tasks, such as machine loading/unloading, welding, painting, and assembling. However, most industrial robots have very limited sensing ability. For example, if assembly parts are not presented to a robot in a precise, repetitive fashion, the robot simply cannot perform its job, because it cannot find the parts and pick them up. If an invading object happens to enter the work space of a robot, the robot may collide with it, damaging both, because the robot cannot see it as an obstacle. In other words, existing robots are far from intelligent: they cannot hear, see, or feel (the force, shape, texture, temperature, weight, distance, and speed of objects in their grasp or nearby). Imagine the difficulty involved in teaching a person with all these handicaps to perform precise assembly operations. This is why we need intelligent robots.
An entirely new phase in robotic applications has been opened by the development of "intelligent robots". An intelligent robot is basically one that can sense its surroundings and possesses enough intelligence to respond to a changing environment in much the same way as we do. Such ability requires the direct application of sensory perception and artificial intelligence. Much of the research in robotics has been, and still is, concerned with how to equip robots with visual sensors (the "eyes") and tactile sensors (the "fingers"). Artificial intelligence will enable the robot to respond and adapt to changes in its tasks and in its environment, and to reason and make decisions in reaction to those changes.
In the light of these requirements, an intelligent robot may be described as one that is equipped with various kinds of sensors and possesses sophisticated signal-processing capabilities and the intelligence for making decisions, in addition to general mobility.
One important capability that humans demonstrate when performing intelligent tasks is receiving sensory information from their surroundings. Human senses provide a rich variety of sensory inputs that are critical to many of the activities performed. Much effort has been made to give intelligent robots similar sensory abilities. Among them, vision is the most important sense: it is estimated that up to 80% of sensory information is received through vision. Vision can be bestowed on robotic systems by using imaging sensors in various ways. To improve accuracy, vision can help precisely adjust the robot hand by means of optical feedback control using visual sensors. Determining the location and orientation of the parts to be picked up, and recognizing them, is another important application. For example, vision is necessary to guide a seam-welding robot in mating two parts together and welding them. Regardless of the application, the vision system must contain an illumination source, an imaging sensor, an image digitizer, and a system control computer.
If ambient lighting serves as the illumination source, the imaging process is passive. This type of imaging is common in military applications, since the position of the viewed scene is beyond the control of the viewer. In industrial applications, however, controlled illumination, or active imaging, can be set up as freely as desired.
The imaging sensor of a robot vision system is defined as an electro-optical device that converts an optical image into a video signal. The image sensor is usually either a TV camera or a solid-state imaging device, for instance a charge-coupled device (CCD). The latter offers greater sensitivity, a longer service life, and lighter weight, and is thus preferred over the TV camera.
The camera system contains not only the camera detector but also, very importantly, a lens system. The lens determines the field of view, the depth of focus, and other optical factors that directly affect the quality of the image detected by the camera. Novel techniques, such as a fish-eye lens used to obtain a 360-degree field of view without the need for focus adjustment, have recently been investigated and have proved very useful in mobile robots.
Either TV cameras or CCDs produce an image by generating, at every pixel, an analogue value proportional to its light intensity. To enable a digital computer to work with these analogue signals, as the camera scans a scene its output must be converted to a digital code by an analogue-to-digital (A/D) converter and stored in the random-access memory (RAM) installed in the computer. The computer then analyzes the digital image and extracts such imagery information as edges, regions, boundaries, colors, and textures of the objects in the image. Finally, the computer interprets, or understands, what the image represents in terms of knowledge about the scene and gives the robot a symbolic description of its environment.
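The digitization and edge-extraction steps above can be sketched in a few lines of Python. This is a minimal illustration, not the actual algorithms used in any particular vision system: the A/D conversion is modeled as quantizing intensities in the range 0.0 to 1.0 into 8-bit gray levels, and the "edge extraction" is a crude intensity-jump test standing in for real edge operators.

```python
# A minimal sketch (not from the original text) of how an analog image
# becomes digital data a computer can analyze: each pixel's light
# intensity (0.0-1.0) is quantized by an "A/D converter" to an 8-bit
# value, and a simple gradient test then marks edge pixels.

def digitize(analog_image, bits=8):
    """Quantize analog intensities (0.0-1.0) into integer gray levels."""
    levels = (1 << bits) - 1
    return [[round(v * levels) for v in row] for row in analog_image]

def edge_map(digital_image, threshold=32):
    """Mark a pixel as an edge if the intensity jumps sharply to its
    right or below (a crude stand-in for real edge operators)."""
    h, w = len(digital_image), len(digital_image[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = abs(digital_image[y][x + 1] - digital_image[y][x])
            dy = abs(digital_image[y + 1][x] - digital_image[y][x])
            edges[y][x] = max(dx, dy) > threshold
    return edges

# A tiny scene: dark left half, bright right half -> a vertical edge.
scene = [[0.1, 0.1, 0.9, 0.9] for _ in range(4)]
digital = digitize(scene)
edges = edge_map(digital)
```

A real system would go on to group edge pixels into boundaries and regions, and to match them against stored models of the scene.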
Next in importance to vision is tactile sensing, or touch. Imagine how a blind person can do delicate jobs relying on his or her sensitive touch! A blind robot can be extremely effective in performing an assembly task using only a sense of touch. Touch is of particular importance for providing the feedback necessary to grip delicate objects firmly without damaging them.
To simulate the tactile sense of human hands, a complete tactile-sensing system must perform three fundamental sensing operations: (1) joint force sensing, which senses the forces applied to the robot's hand, wrist, and arm joints; (2) touch sensing, which senses the pressure applied at various points on the gripper's surface; and (3) slip sensing, which senses any movement of the object while it is being grasped.
The joint forces are usually sensed by strain gauges arranged in the robot wrist assembly. A strain gauge is a force-sensing element whose resistance changes in proportion to the amount of force applied to it.
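The proportionality just described can be put into a short calculation. All the constants below are illustrative assumptions rather than values from the text: the resistance change obeys delta_R / R0 = GF × strain, and a calibration constant maps strain back to force.

```python
# Illustrative sketch (all constants are assumptions, not from the
# text): a strain gauge's resistance change is proportional to strain,
#     delta_R / R0 = GF * strain,
# so once the gauge is calibrated the joint force can be recovered
# from the measured resistance alone.

GAUGE_FACTOR = 2.0    # typical for metal-foil gauges (assumed)
R0 = 120.0            # unstrained resistance in ohms (assumed)
N_PER_STRAIN = 1.0e6  # calibration constant: newtons per unit strain (assumed)

def force_from_resistance(r_measured):
    """Infer the applied force from the gauge's measured resistance."""
    strain = (r_measured - R0) / (R0 * GAUGE_FACTOR)
    return strain * N_PER_STRAIN

# A reading of 120.24 ohms implies a strain of 0.001, i.e. 1000 N
# under these assumed constants.
f = force_from_resistance(120.24)
```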
The simplest touch sensor is a gripper equipped with an array of miniature microswitches. This type of sensor can only determine the presence or absence of an object at a particular point, or an array of points, on the robot hand. A more advanced type of touch sensor uses arrays of pressure-sensitive piezoelectric material (conductive rubber, conductive foam, etc.). This material conducts electrical current when stressed. This arrangement allows the sensor to perceive changes in force and pressure within the robot hand. The matrices of tactile sensors range in size from 8×8 to 80×80 two-dimensional arrays. Since the force at each point can be determined, the distribution of forces over the hand's surface, and hence the shape of the grasped object, can be determined. The force data can be used to display, on a TV-like screen, the shape of the object and the distribution of force on its surface.
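Reading such a tactile matrix is straightforward to sketch. In this hypothetical example (cell units and the contact threshold are assumptions), each cell of the array holds the force sensed at one point, so the total grip force and the contact footprint, the crude "shape" of the object, fall out directly.

```python
# A minimal sketch (assumed units and sizes) of interpreting a tactile
# array: summing the cells gives the total grip force, and the cells
# above a contact threshold outline the shape of the grasped object.

def analyze_tactile(array, contact_threshold=0.05):
    """Return the total force and the set of (row, col) cells in contact."""
    total = sum(sum(row) for row in array)
    footprint = {(y, x)
                 for y, row in enumerate(array)
                 for x, force in enumerate(row)
                 if force > contact_threshold}
    return total, footprint

# A 4x4 patch with a small object pressing on the middle 2x2 cells.
patch = [[0.0, 0.0, 0.0, 0.0],
         [0.0, 0.5, 0.5, 0.0],
         [0.0, 0.5, 0.5, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
total, footprint = analyze_tactile(patch)
```

A real 80×80 array works the same way, just with finer resolution; its footprint is what the text describes being drawn on a TV-like screen.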
Slip sensing is required for a robot to apply the optimum grasping force to a delicate, fragile object. This ability prevents damage to the object and allows it to be picked up without the danger of being dropped. The method used to detect slip is based on sensing any vibration or movement between the object and the gripper, no matter how subtle it may be. The gripping force is increased step by step until the object is firmly grasped and no more slip occurs.
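The step-by-step grasping loop just described can be sketched as follows. The slip sensor here is a hypothetical stand-in (a callable that reports whether the object still slips at a given force); the step size and safety limit are assumptions.

```python
# A hedged sketch of the grasp-force control loop described above:
# increase the grip force in steps until the slip sensor no longer
# reports movement. `slips_at` stands in for a real slip sensor.

def grasp(slips_at, start=1.0, step=0.5, max_force=50.0):
    """Raise the grip force step by step until slip stops; return it."""
    force = start
    while slips_at(force):
        force += step
        if force > max_force:
            raise RuntimeError("object cannot be grasped safely")
    return force

# A toy object that stops slipping once the grip reaches 3.0 N (assumed).
final = grasp(lambda f: f < 3.0)
```

The safety limit matters: without it, a sensor fault would make the loop crush the object, which is exactly what slip sensing is meant to prevent.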
The integration of tactile and vision sensing can dramatically enhance adaptive robotic assembly tasks. An example of this combination would be a vision sensor used to locate and identify objects and the position of the robot itself, combined with a tactile sensor used to detect the distribution of force and pressure and to determine the torque, weight, center of mass, and compliance of the material being handled. This hand-eye coordination for general-purpose manipulation would be extremely powerful in the industrial world.
Another important kind of sensing for a robot is range, or depth, information. Range data are necessary for a robot to build three-dimensional spatial information when it carries out real-world navigation. Such information is required whether the robot is stationary and navigating its gripper, or mobile and navigating its body. For example, consider an industrial bin-picking operation in which a sensing robot can locate objects of interest in the bin even though it does not know exactly where they are. The sequence of operations for such a robot might go something like this:
(1) Scan the parts in the bin and locate the object of interest in three-dimensional space.
(2) Determine the relative position and orientation of the object.
(3) Move its gripper and/or its body to the object location.
(4) Position and orient the gripper according to the object’s position and orientation.
(5) Pick up the object.
(6) Place the object at the required location.
The first and second steps in this sequence determine the distance and orientation to the object in three-dimensional (3-D) space.
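The six-step sequence above can be sketched as a driver loop. Every robot operation here is a hypothetical stand-in that a real controller would supply; only the control flow mirrors the listed steps.

```python
# The six-step bin-picking sequence above, as a control-flow sketch.
# All callables are hypothetical stand-ins for real robot operations.

def pick_from_bin(scan, move_to, orient_gripper, pick, place, drop_off):
    pose = scan()                 # steps 1-2: locate the object and get
    if pose is None:              # its 3-D position and orientation
        return False              # nothing of interest found in the bin
    move_to(pose)                 # step 3: move gripper/body to the object
    orient_gripper(pose)          # step 4: match the object's orientation
    pick()                        # step 5: pick up the object
    place(drop_off)               # step 6: place it at the required location
    return True

log = []
ok = pick_from_bin(
    scan=lambda: {"x": 1, "y": 2, "z": 0, "yaw": 90},
    move_to=lambda p: log.append("move"),
    orient_gripper=lambda p: log.append("orient"),
    pick=lambda: log.append("pick"),
    place=lambda d: log.append(f"place@{d}"),
    drop_off="station-3",
)
```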
Range sensors fall into two categories: passive devices and active devices. As a passive device, a vision system equipped with stereo cameras can determine depth by triangulation. However, vision systems are relatively sophisticated and expensive. As an active device, ultrasonic ranging systems have been widely used to give environmental information to a mobile robot. Most ultrasonic ranging systems are based on some form of time-of-flight measurement. The system works in the following manner. A transducer emits ultrasonic waves toward an object; some of the waves echo from the object and return to the transducer. The ultrasonic sensor determines the range by measuring the elapsed time between transmission and reception, that is, the time of flight, from which the distance d can be calculated by
d = c×tof/2
where tof is the time of flight and c is the sonic speed. The factor of 2 accounts for the round trip: the waves travel to the object and back.
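The formula is a one-liner in code. The speed of sound used here, 343 m/s, assumes air at roughly room temperature; in a real sensor it would be compensated for temperature.

```python
# The time-of-flight formula above, as code: the echo travels to the
# object and back, so the distance is half of speed times elapsed time.

SONIC_SPEED = 343.0  # m/s in air at about 20 degrees C (assumed)

def range_from_tof(tof_seconds):
    """Distance to the object from the measured round-trip time."""
    return SONIC_SPEED * tof_seconds / 2

# An echo arriving 10 ms after transmission puts the object about
# 1.715 m away under this assumed sonic speed.
d = range_from_tof(0.010)
```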
Proximity sensing detects objects that are in, or moving into, the close vicinity of the robot's hands or legs. To detect objects at short distances, or moving with high speed and acceleration, proximity sensors usually work by measuring the distortion of an electromagnetic field when an object lies near it, or the frequency shift of a sonic wave when the object moves toward or away from the robot (making use of the Doppler effect). Proximity sensing is widely used in intelligent robots to detect obstacles before a possible collision.
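The Doppler principle just mentioned can also be sketched numerically. This is an illustrative calculation, not taken from the text: it assumes a reflecting (round-trip) geometry, so the shift is doubled, and object speeds far below the speed of sound.

```python
# A hedged sketch of the Doppler effect mentioned above: an object
# moving toward the sensor shifts the reflected frequency upward, and
# the size of the shift gives the approach speed. The low-speed,
# round-trip approximation f_received ~= f_emitted * (1 + 2v/c) is assumed.

SONIC_SPEED = 343.0  # m/s in air (assumed)

def approach_speed(f_emitted, f_received):
    """Radial speed of the object from the round-trip Doppler shift
    (positive means the object is approaching the sensor)."""
    return SONIC_SPEED * (f_received - f_emitted) / (2 * f_emitted)

# A 40 kHz ping coming back at 40.4 kHz implies an approach speed of
# about 1.7 m/s under these assumptions.
v = approach_speed(40_000.0, 40_400.0)
```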
For some specialized intelligent robots, chemical sensors that simulate the senses of smell and taste of animals may be important. For example, a robot that can detect the odor of a specific substance, for instance cocaine, can help customs officers track drug traffickers; a robot that can sense the subtle odor of a dangerous pollutant can help conservationists detect its leakage.
Another feature that distinguishes intelligent robots from conventional robots is the ability to alter an action plan in response to unexpected changes in the environment. A robot plan comprises a series of actions which the robot will take, step by step, to achieve its goal. Robot planning is a subject of artificial intelligence, especially when the plan involves interfering actions. Interference occurs when an action renders invalid an already-achieved precondition needed by another action that is to be taken later in the plan. Obviously, interfering actions render the whole plan, or part of it, invalid. There is no general solution to interfering actions in robot planning: in some complex cases there seems to be no way to remove all interference simply by changing the order in which actions take place, although in simple cases reordering the actions is often enough.
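The notion of interfering actions can be made concrete with a small STRIPS-style sketch. This is a minimal illustration under assumed actions (the wiring example is invented, not from the text): each action has preconditions, facts it adds, and facts it deletes, and an ordering is invalid when a later action's precondition was deleted earlier. Trying the other orderings is the simple-case fix mentioned above.

```python
# A minimal sketch of "interfering actions": each action is a triple
# (needs, adds, deletes) of fact sets. An ordering is invalid if some
# action's precondition is missing when its turn comes, e.g. because
# an earlier action deleted it.

from itertools import permutations

def plan_is_valid(plan, state):
    state = set(state)
    for needs, adds, deletes in plan:
        if not needs <= state:
            return False              # a needed precondition is missing
        state = (state | adds) - deletes
    return True

def find_valid_order(actions, state):
    for order in permutations(actions):
        if plan_is_valid(list(order), state):
            return list(order)
    return None                       # interference cannot be reordered away

# Invented example: turning the power on deletes "power_off", which the
# wiring action still needs, so wiring must come first.
turn_on = (set(), {"power_on"}, {"power_off"})
wire = ({"power_off"}, {"wired"}, set())
result = find_valid_order([turn_on, wire], {"power_off"})
```

Exhaustive reordering is exponential in the number of actions, which hints at why, as the text says, there is no general solution for complex plans.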
Based on data provided by its sensing systems, which reflect changes in its environment, the planning system of an intelligent robot decides to alter its action plan, either partly or completely, to respond to those changes without requiring human intervention.
With senses and adaptive control, an intelligent robot appears autonomous and is able to adapt itself to its jobs and environment. For example, it can recognize objects, distinguish obstacles from their surroundings, take action to avoid obstacles, adjust its position, attitude, and the forces it applies to objects when performing jobs, arrange a proper working order for better efficiency, coordinate its two hands, and so on.
Finally, let us take a brief look at an example of an intelligent robot.
This mobile robot, called Help Mate, was developed by Transition Research Corporation (TRC) in Danbury, Connecticut, U.S.A. Help Mate currently carries off-schedule (late) meal trays from the dietary department to the nursing stations at Danbury Hospital. Future applications include transportation of pharmacy and lab supplies, patient records, and mail. A robot that can perform such transportation tasks relieves nurses of a frustrating and time-consuming chore and frees up time for patient care. Approximately 90 late meal trays are hand-delivered each day at Danbury Hospital, a 450-bed facility. Late meal trays result from new admissions, diet order changes, transferred patients, special requests, and so on.
Help Mate stands 54 inches high with a footprint of 28×30 square inches. It weighs 350 pounds, can carry loads of up to 50 pounds, and travels at 500 mm/second in Danbury Hospital, except in patient corridors, where it travels at 300 mm/second. The maximum velocity is 1 meter/second and the maximum acceleration is 1 meter/second².
The front view shows the placement of the sensors, warning lights, and emergency stop switches; the rear view shows the backpack and the user-interface control panel. A flashing amber light on top of the robot, visible from any direction, warns people that Help Mate is moving autonomously under its program control.
To carry late meal trays from the dietary department to nursing stations distributed among the six buildings and twenty floors of Danbury Hospital, Help Mate must be able to call and ride the elevators. Help Mate communicates with the elevator controller via infrared transmitter/receiver pairs mounted on Help Mate and in each elevator; it calls the elevator, enters when the doors open, closes the doors, and requests the desired floor. Once the elevator reaches the desired floor and the door opens, Help Mate leaves the elevator, signals for the door to close, and continues on its way to the nursing station.
Vision, sonar, and infrared proximity sensors are used in navigation and obstacle avoidance. An array of Polaroid ultrasonic proximity sensors measures the range to obstacles and walls, while the infrared proximity sensors detect edges and the retro-reflective tape used as beacons during navigation.
An auxiliary subsystem handles operator-interface functions, and a drive subsystem steers the vehicle and provides odometry. All of the sensor subsystems and the drive and auxiliary subsystems communicate with the 68000 master processor through a master/slave protocol.
The most significant capability of Help Mate is its autonomous navigation in hallways, elevators, lobbies, and doorways. Given a perfect drive system and an environment free of obstacles, odometry-based navigation would suffice. In the real world, sensors must be used to recognize known landmarks, avoid a wide variety of obstacles, including humans, and correct accumulating odometry errors. Sensor data are fused into a composite local map, and navigation information is then derived from the map. Help Mate uses as many natural landmarks as possible for navigation and registration. These local landmarks are encoded into the database. Raw sensor data are processed in real time on a 68000 microprocessor to recognize landmarks and "register" the local model against a prior world model.
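The interplay between drifting odometry and landmark registration can be sketched in one dimension. Everything here is invented for illustration (map positions, drift rate, landmark names); it is not Help Mate's actual algorithm, only the idea: wheel odometry accumulates error, and recognizing a mapped landmark lets the robot snap its position estimate back to truth.

```python
# A hedged 1-D sketch of the odometry-correction idea above: wheel
# odometry over-counts distance by a small drift, and recognizing a
# known landmark (e.g. a beacon tape strip) at its mapped position
# re-registers the pose estimate. All numbers are assumptions.

LANDMARKS = {"elevator_door": 12.0, "nursing_station": 30.0}  # meters, assumed map

class Odometer:
    def __init__(self):
        self.position = 0.0       # estimated position along the hallway

    def advance(self, measured, drift=0.02):
        # each true meter is over-counted by an assumed 2% drift
        self.position += measured * (1 + drift)

    def register(self, landmark):
        # snap the estimate to the landmark's mapped position
        self.position = LANDMARKS[landmark]

odo = Odometer()
odo.advance(12.0)               # estimate 12.24 m, truth 12 m: 0.24 m error
error_before = odo.position - 12.0
odo.register("elevator_door")   # infrared sensor spots the beacon
```

Without periodic registration the 2% drift compounds over every leg of the route, which is why the text insists odometry alone would not suffice.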
An operator inserts his or her ID card to unlock the backpack and inserts a payload, such as meal trays. The operator then selects a destination from the computer operation menu with the keyboard and removes the card. Once the card is removed, the backpack locks securely. The payload can be removed only at the destination, and only by using the specific card issued to each nursing station.
When Help Mate arrives at the specified destination, it sounds a buzzer that alerts the unit secretary that a package has arrived. The unit secretary then inserts her card and gains access to the operation menu system. The backpack is unlocked, and the secretary takes the contents of the delivery, signs the delivery slip, and places it back in the backpack. She then removes her card, which locks the backpack, and dispatches Help Mate to its next destination.
From this example, we can see that the long-awaited development of robots for domestic service is rapidly becoming feasible. For the elderly, intelligent robots could provide simple home-cleaning, food-preparation, security, and entertainment services. As the cost of computers and electro-mechanical devices continues to decrease, intelligent robots for transportation, hospital, and home services will become economically feasible.
The near future will see the prosperity of intelligent robots. In unmanned workshops, intelligent robots will be the major work force, taking over delicate, boring, and dangerous jobs from human workers. In some environments, such as underground, underwater, and outer space, intelligent robots will be the only laborers, working tirelessly. By 2007, intelligent robots will, as trailblazers, land on Mars to undertake pioneering expeditionary missions before astronauts set foot there. Believe it or not!
中文翻譯
智能機(jī)器人
工業(yè)機(jī)器人已被開(kāi)發(fā)來(lái)完成種類廣泛的制造任務(wù),例如,給機(jī)器裝卸零件、焊接、噴漆、裝配,但大多數(shù)工業(yè)機(jī)器人的感覺(jué)能力很有限。舉例來(lái)說(shuō),如果裝配零件并不以精確的、重復(fù)的方式交給機(jī)器人,那么機(jī)器人簡(jiǎn)直不能做它的工作,因?yàn)樗荒苷业讲⒛闷疬@個(gè)零件。如果碰巧有一物體進(jìn)入機(jī)器人的工作場(chǎng)所,機(jī)器人就可能與它碰撞,對(duì)雙方都造成破壞,因?yàn)闄C(jī)器人并不把它看作障礙物。換句話說(shuō),現(xiàn)存的機(jī)器人與智能相差甚遠(yuǎn),它們不能聽(tīng)、看和觸摸(感知握在它手里的或在附近的物體的力、形狀、表面紋理結(jié)構(gòu)、溫度、重量、距離和速度)。請(qǐng)想象一下,要教會(huì)一位具有這一切缺陷的人去做精確的裝配操作是多么困難啊!這就說(shuō)明我們?yōu)槭裁葱枰悄軝C(jī)器人。
智能機(jī)器人的開(kāi)發(fā)在機(jī)器人學(xué)的應(yīng)用方面開(kāi)辟了一個(gè)嶄新的階段。智能機(jī)器人必須有能力感知它周圍的環(huán)境,并有足夠的智慧對(duì)環(huán)境的變化作出反應(yīng),就像我們做的那樣。這樣一種能力要求直接應(yīng)用傳感器感知和人工智能。大部分機(jī)器人學(xué)方面的研究過(guò)去是、現(xiàn)在仍然是關(guān)于如何用視覺(jué)傳感器——“眼睛”和觸覺(jué)傳感器——“手指”來(lái)裝備機(jī)器人。人工智能將使機(jī)器人對(duì)它的工作任務(wù)和環(huán)境變化作出反應(yīng)并能適應(yīng)之,而且要對(duì)這些變化作出推理,作出決定。根據(jù)這些要求,智能機(jī)器人可描述為帶有各種各樣的傳感器、具有復(fù)雜的信號(hào)處理能力和作出決定的智慧,此外還要有一般的活動(dòng)能力。
人類在做智能工作時(shí)顯示出的一種重要能力,就是能從周圍環(huán)境中接收感覺(jué)信息。人類的感官提供了種類繁多的感覺(jué)輸入,對(duì)完成許多活動(dòng)至關(guān)重要。為了使智能機(jī)器人具備類似的感覺(jué)能力,人們已作了很大努力。其中,視覺(jué)是最重要的感覺(jué):據(jù)估計(jì),多達(dá)80%的感覺(jué)信息是由視覺(jué)接收的。以各種方式利用成像傳感器可以使機(jī)器人獲得視力。為了改善工作的精確性,可以利用帶視覺(jué)傳感器的光學(xué)反饋控制來(lái)精確地調(diào)整機(jī)器人的手。確定要揀起來(lái)的零件的位置和方向并辨認(rèn)它們是視覺(jué)的另一重要用途。舉例來(lái)說(shuō),要引導(dǎo)一臺(tái)縫焊機(jī)器人把兩件零件排在一起并焊起來(lái),就必須有視覺(jué)。不論何種應(yīng)用,視覺(jué)系統(tǒng)都包括照明光源、成像傳感器、圖像數(shù)字化器和一臺(tái)系統(tǒng)控制用計(jì)算機(jī)。
若把周圍的光線用作照明光源,這一成像過(guò)程就是被動(dòng)式的。這一類型的成像常用于軍事用途,因?yàn)楸挥^察對(duì)象的位置不受觀察者控制;但在工業(yè)應(yīng)用中,可以盡可能自由地設(shè)置照明光源,即主動(dòng)成像。
機(jī)器人視覺(jué)系統(tǒng)的圖像傳感器定義為一種電子光學(xué)設(shè)備,它能把光學(xué)圖像轉(zhuǎn)變成視頻信號(hào)。圖像傳感器通常或是電視攝像機(jī),或是固態(tài)成像器件,例如電荷耦合器件(CCD)。后一種器件靈敏度較高、壽命長(zhǎng)、重量輕,因此與電視攝像機(jī)比較,更受歡迎。
攝像機(jī)系統(tǒng)不僅包括攝像機(jī)的成像器,而且非常重要的是還包括一組透鏡系統(tǒng)。透鏡確定視場(chǎng)、景深和其它直接影響攝像機(jī)攝取圖像質(zhì)量的光學(xué)因素。一些新穎的技術(shù),例如不必經(jīng)過(guò)聚焦調(diào)整就可以獲得360度視場(chǎng)的魚眼透鏡,最近已得到研究,并證明在移動(dòng)機(jī)器人上非常有用。
無(wú)論是電視攝像機(jī)還是CCD,都在每一個(gè)像素上產(chǎn)生與該點(diǎn)光強(qiáng)成比例的模擬值,從而組成一幅圖像。為了使數(shù)字計(jì)算機(jī)可以處理這些模擬信號(hào),當(dāng)攝像機(jī)掃描一場(chǎng)景時(shí),它的輸出必須由模擬-數(shù)字(A/D)轉(zhuǎn)換器轉(zhuǎn)換成數(shù)字碼,并存儲(chǔ)在安裝在計(jì)算機(jī)內(nèi)部的隨機(jī)存儲(chǔ)器(RAM)中。計(jì)算機(jī)隨后分析數(shù)字化圖像,并抽取圖像中物體的棱、區(qū)域、邊界、顏色及表面紋理結(jié)構(gòu)等圖像信息。最后,計(jì)算機(jī)根據(jù)有關(guān)場(chǎng)景的知識(shí)對(duì)圖像所代表的東西作出解釋和理解,然后給機(jī)器人一個(gè)用符號(hào)描述的環(huán)境。
在重要性上僅次于視覺(jué)的是觸覺(jué)。請(qǐng)想象一下,盲人靠他們靈敏的觸覺(jué)能做多么精細(xì)的活兒!只靠觸覺(jué)的“盲”機(jī)器人也能極其有效地完成裝配工作。觸覺(jué)對(duì)于提供牢牢抓住纖弱物體而又不致弄壞它所必需的反饋特別重要。
為了模擬人手的觸覺(jué),完整的觸覺(jué)系統(tǒng)必須完成三種基本觸覺(jué)操作:(1)關(guān)節(jié)力感受——感受施加在機(jī)器人手上、腕上和臂關(guān)節(jié)上的力;(2)觸壓感受——感受施加在抓取器表面上各個(gè)點(diǎn)的壓力;(3)滑動(dòng)感受——感受被握物體的任何運(yùn)動(dòng)。
安裝在機(jī)器人腕部組合體中的各種應(yīng)變片通常用來(lái)測(cè)量關(guān)節(jié)力。應(yīng)變片是一種力敏感元件,它的電阻按比例地隨著施加在它上面的力的大小而變化。
最簡(jiǎn)單的觸覺(jué)傳感器應(yīng)用例子就是裝備有小型微動(dòng)開(kāi)關(guān)陣列的抓取器。這種類型的傳感器只能確定機(jī)器人手上某一特定點(diǎn)或一些點(diǎn)的陣列處是否有物體。較為先進(jìn)的一類觸覺(jué)傳感器使用壓敏的壓電材料(導(dǎo)電橡膠、導(dǎo)電泡沫等)做成的陣列。這種材料受到應(yīng)力時(shí)能傳導(dǎo)電流。這種安排使傳感器能感覺(jué)到機(jī)器人手中力和壓力的變化。觸覺(jué)傳感器矩陣的大小是從8×8到80×80的二維陣列。由于每一點(diǎn)上的力都可以確定,機(jī)器人手面上力的分布以及被握物體的形狀也就可以確定。這些力的數(shù)據(jù)可以用來(lái)在一部像電視那樣的熒屏上顯示物體的形狀以及力在其表面上的分布狀況。
機(jī)器人要求感受滑動(dòng),是為了對(duì)纖細(xì)、脆弱的物體施加最佳大小的握力。這種能力既防止弄壞該物體,又使該物體被抓起時(shí)沒(méi)有被摔落的危險(xiǎn)。用來(lái)檢測(cè)滑動(dòng)的方法基于感受物體與抓取器之間的任何震動(dòng)或任何運(yùn)動(dòng),不管它們多么難以察覺(jué)。抓力一步一步地增大,直到該物體被牢牢抓住、再也不出現(xiàn)滑動(dòng)為止。
觸覺(jué)和視覺(jué)的結(jié)合能大大增強(qiáng)機(jī)器人干裝配工作的適應(yīng)能力。這種結(jié)合的一個(gè)例子是:一只視覺(jué)傳感器用來(lái)給物體定位并辨認(rèn)它們,確定機(jī)器人本身的位置;與之結(jié)合的一只觸覺(jué)傳感器用來(lái)探測(cè)力和壓力的分布狀況,確定它所處理的材料的力矩、重量、質(zhì)量中心和柔順性。在工業(yè)界,這種用于一般操作的手眼協(xié)調(diào)將是極其有用的。
對(duì)機(jī)器人來(lái)說(shuō),另一種重要的感知就是有關(guān)距離或深度的信息。機(jī)器人進(jìn)行實(shí)際導(dǎo)航時(shí),需要距離數(shù)據(jù)來(lái)建立三維空間信息。