Wanfang College of Science and Technology, Henan Polytechnic University
Mid-Term Check Form for Undergraduate Graduation Design (Thesis)
Supervisor: Deng Le    Professional Title:
School (Department): School of Mechanical and Power Engineering    Teaching and Research Office:
Topic: Four-Bar-Linkage Tracked Search-and-Rescue Robot
Student Name: Qian Longfei
Major and Class: Mechanical Design, Class 2 of 2008
Student ID: 0828070150
一、選題質(zhì)量:(主要從以下四個(gè)方面填寫(xiě):1、選題是否符合專(zhuān)業(yè)培養(yǎng)目標(biāo),能否體現(xiàn)綜合訓(xùn)練要求;2、題目難易程度;3、題目工作量;4、題目與生產(chǎn)、科研、經(jīng)濟(jì)、社會(huì)、文化及實(shí)驗(yàn)室建設(shè)等實(shí)際的結(jié)合程度)
所選的題目與書(shū)本學(xué)習(xí)知識(shí)聯(lián)系緊密,比較貼近生產(chǎn)實(shí)際情況,比較具有代表性;
具有非常大的發(fā)揮空間和巧活多樣的設(shè)計(jì)思路,對(duì)于本科機(jī)設(shè)專(zhuān)業(yè)的學(xué)生來(lái)說(shuō),題目難
相對(duì)容易,主要是進(jìn)行四連桿履帶式搜救機(jī)器人的結(jié)構(gòu)設(shè)計(jì)與越障能力分析,這主要
是針對(duì)機(jī)械設(shè)計(jì)學(xué)生的要求,其機(jī)器人的控制部分還需與計(jì)算機(jī),通信,電子等專(zhuān)業(yè)
密切聯(lián)系。四連桿履帶式搜救機(jī)器人的結(jié)構(gòu)設(shè)計(jì)主要是機(jī)器人的外形尺寸設(shè)計(jì),內(nèi)部
零部件安置方式以及外部翻越行動(dòng)方式,此機(jī)器人采用四連桿履帶式,主要由機(jī)架個(gè)對(duì)稱(chēng)分布的履帶變形模塊組成。機(jī)架兩側(cè)是基于平行四邊形結(jié)構(gòu)的履帶變形模塊,主要由四連桿變形機(jī)構(gòu),主驅(qū)動(dòng)輪,被動(dòng)輪及繞在履帶輪上的履帶組成,其中四連桿變形機(jī)構(gòu)
由連桿,主動(dòng)曲柄,被動(dòng)曲柄組成,用于提供驅(qū)動(dòng)力,并且可以繞機(jī)架旋轉(zhuǎn),實(shí)現(xiàn)履帶變形,在越障時(shí)給機(jī)器人提供額外的輔助運(yùn)動(dòng)。選題完全符合專(zhuān)業(yè)培養(yǎng)目標(biāo),屬于機(jī)械設(shè)計(jì)的一種,對(duì)即將畢業(yè)的學(xué)生的再學(xué)習(xí)有著較好的指引作用, 不僅僅局限在機(jī)械基礎(chǔ)知識(shí)上更涉及了有關(guān)材料學(xué)、力學(xué)等多學(xué)科知識(shí),使我們對(duì)交叉學(xué)科有了一定的涉足,綜合訓(xùn)練的要求也得到充分的體現(xiàn)。
二、開(kāi)題報(bào)告完成情況:
開(kāi)題報(bào)告已經(jīng)完成。
從適合實(shí)際工作環(huán)境出發(fā),確定了明確的課題設(shè)計(jì)方向;對(duì)四連桿履帶式搜救機(jī)器人
已經(jīng)有了一定的認(rèn)識(shí)了解。已經(jīng)對(duì)課題進(jìn)行了設(shè)計(jì)、分析,并有了突破性的進(jìn)展。同時(shí),
已完成了對(duì)相關(guān)資料的查閱,對(duì)課題有了總體的分析,開(kāi)題報(bào)告完成質(zhì)量相對(duì)較高。
III. Interim Results
1. The opening report has been completed; the overall layout scheme and the main structural parameters have been determined, and the selection of some standard parts as well as the design calculations for most components have been finished.
2. The drawing of some part drawings is essentially complete, and the design specification is being compiled.
3. The English translation work is essentially complete, and checking of some of the structural designs is now under way.
四、存在主要問(wèn)題:
由于專(zhuān)業(yè)基礎(chǔ)知識(shí)學(xué)習(xí)不夠深入,設(shè)計(jì)經(jīng)驗(yàn)欠缺,參考資料收集有限,設(shè)計(jì)主題思路把握不夠,簡(jiǎn)單問(wèn)題解決不夠靈活;設(shè)計(jì)中結(jié)構(gòu)較復(fù)雜,機(jī)器人越障能力分析有一定的難度,
數(shù)據(jù)分析與變形草圖的繪制綜合分析機(jī)器人順利越過(guò)90度障礙物能力。同時(shí)機(jī)器人內(nèi)部
結(jié)構(gòu)中的電動(dòng)機(jī),減速器,齒輪設(shè)計(jì)等細(xì)節(jié)問(wèn)題的要求以及內(nèi)部結(jié)構(gòu)安排方式,如何使得
安排即合理又正確等問(wèn)題需要進(jìn)一步解決。
五、指導(dǎo)教師對(duì)學(xué)生在畢業(yè)實(shí)習(xí)中,勞動(dòng)、學(xué)習(xí)紀(jì)律及畢業(yè)設(shè)計(jì)(論文)進(jìn)展等方面的
評(píng)語(yǔ)
指導(dǎo)教師:
年 月 日
多自由度步行機(jī)器人
摘要
在現(xiàn)實(shí)生活中設(shè)計(jì)一款不僅可以倒下而且還可以站起來(lái)的機(jī)器人靈活智能機(jī)器人很重要。本文提出了一種兩臂兩足機(jī)器人,即一個(gè)模仿機(jī)器人,它可以步行、滾動(dòng)和站起來(lái)。該機(jī)器人由一個(gè)頭,兩個(gè)胳膊和兩條腿組成?;谶h(yuǎn)程控制,設(shè)計(jì)了雙足機(jī)器人的控制系統(tǒng),解決了機(jī)器人大腦內(nèi)的機(jī)構(gòu)無(wú)法與無(wú)線電聯(lián)系的問(wèn)題。這種遠(yuǎn)程控制使機(jī)器人具有強(qiáng)大的計(jì)算頭腦和有多個(gè)關(guān)節(jié)輕盈的身體。該機(jī)器人能夠保持平衡并長(zhǎng)期使用跟蹤視覺(jué),通過(guò)一組垂直傳感器檢測(cè)是否跌倒,并通過(guò)兩個(gè)手臂和兩條腿履行起立動(dòng)作。用實(shí)際例子對(duì)所開(kāi)發(fā)的系統(tǒng)和實(shí)驗(yàn)結(jié)果進(jìn) 行了描述。
1 Introduction
As human children show, the ability to get up is indispensable for a robot designed for bipedal locomotion.
To build a robot that can learn biped walking automatically, sensors that tell whether it is standing or lying down are indispensable in the design. Research on biped walking robots has mainly focused on dynamic walking, treating it as an advanced control problem. However, when attention is focused on intelligent reactivity in the real world, what matters more is to build not a robot that never falls down, but a robot that can get up when it does fall down.
To build a robot that can both fall down and get up again, the robot needs a sensing system that tells it whether it has fallen. Although vision is one of a robot's most important sensing functions, the size and power limitations of vision systems make it difficult to build a powerful vision system on the robot's own body. If we want to go further and study vision-based robot behaviors that require dynamic reactions and intelligent reasoning based on experience, the robot body must be light enough to react quickly and must have many actuated degrees of freedom so that it can exhibit a variety of intelligent behaviors. As for legged robots, there has been only a little vision-based research. The difficulty in advancing experimental work on vision-based legged robots comes from the limitations of the hardware: it is hard to keep developing advanced vision software on limited hardware. To solve these problems and advance the study of vision-based behaviors, a remote-brained approach can be adopted. The body and the brain are connected by wireless links, using wireless cameras and remotely controlled actuators; because the body needs no on-board computer, it becomes much easier to build a lightweight body with many actuated degrees of freedom.
In this research we built a two-armed bipedal robot in the remote-brained robot environment and made it perform vision-based balancing and getting up through the cooperation of its arms and legs. The system and the experimental results are described below.
圖 1 遠(yuǎn)程腦系統(tǒng)的硬件配置
圖 2 兩組機(jī)器人的身體結(jié)構(gòu)
2 遠(yuǎn)程腦系統(tǒng)
遠(yuǎn)程控制機(jī)器人不使用自己大腦內(nèi)的機(jī)構(gòu)。它留大腦在控制系統(tǒng)中并且與它用無(wú)線電聯(lián)系。這使我們能夠建立一個(gè)自由的身體和沉重大腦的機(jī)器人。身體和大腦的定義軟件和硬件之間連接的接口。身體是為了適應(yīng)每個(gè)研究項(xiàng)目和任務(wù)而設(shè)計(jì)的。這使我們提前進(jìn)行研究各種真實(shí)機(jī)器人系統(tǒng)。
一個(gè)主要利用遠(yuǎn)程腦機(jī)器人是基于超級(jí)并行計(jì)算機(jī)上有一個(gè)大型及重型顱腦。雖然硬件技術(shù)已經(jīng)先進(jìn)了并擁有生產(chǎn)功能強(qiáng)大的緊湊型視覺(jué)系統(tǒng)的規(guī)模,但是硬件仍然很大。攝像頭和視覺(jué)處理器的無(wú)線連接已經(jīng)成為一種研究工具。遠(yuǎn)程腦的做法使我們?cè)诨谝曈X(jué)機(jī)器人技術(shù)各種實(shí)驗(yàn)問(wèn)題的研究上取得進(jìn)展。
另一個(gè)遠(yuǎn)程腦的做法的優(yōu)點(diǎn)是機(jī)器人機(jī)體輕巧。這開(kāi)辟了與有腿移動(dòng)機(jī)器人合作的可能性。至于動(dòng)物,一個(gè)機(jī)器人有 4 個(gè)可以行走的四肢。我們的重點(diǎn)是基于視覺(jué)的適應(yīng)行為的4肢機(jī)器人、機(jī)械動(dòng)物,在外地進(jìn)行試驗(yàn)還沒(méi)有太多的研究。
大腦是提出的在母體環(huán)境中通過(guò)接代遺傳。大腦和母體可以分享新設(shè)計(jì)的機(jī)器人。一個(gè)開(kāi)發(fā)者利用環(huán)境可以集中精力在大腦的功能設(shè)計(jì)上。對(duì)于機(jī)器人的大腦被提出在一個(gè)母體的環(huán)境,它可以直接受益于母體的“演變”,也就是說(shuō)當(dāng)母體升級(jí)到一個(gè)更強(qiáng)大的計(jì)算機(jī)時(shí)該軟件容易獲得權(quán)利。
圖1顯示了遠(yuǎn)程腦系統(tǒng)由大腦基地,機(jī)器人的身體和大腦體界面組成。在遠(yuǎn) 程腦辦法中大腦和身體接觸面之間的設(shè)計(jì)和性能是關(guān)鍵。我們目前的執(zhí)行情況采 取了完全遠(yuǎn)程腦的辦法,這意味著該機(jī)體上沒(méi)有電腦芯片。目前系統(tǒng)由視覺(jué)子系 統(tǒng),非視覺(jué)傳感器子系統(tǒng)和運(yùn)動(dòng)控制子系統(tǒng)組成。一個(gè)障礙物可以從機(jī)器人機(jī)體 的攝像機(jī)上接收視頻信號(hào)。每個(gè)視覺(jué)子系統(tǒng)由平行放置的 8 個(gè)顯示板組成。一個(gè)機(jī)體僅有一個(gè)運(yùn)動(dòng)指令信號(hào)和傳輸傳感器的信號(hào)的接收器。該傳感器信息從視頻發(fā)射機(jī)傳輸。傳輸其他傳感器的信息是可能的,如觸摸和伺服錯(cuò)誤通過(guò)視頻 傳輸?shù)男盘?hào)整合成一個(gè)視頻圖像。該驅(qū)動(dòng)器是包括一個(gè)模擬伺服電路和接收安置器的連接模塊。離子參考價(jià)值來(lái)自于動(dòng)作接收器。該動(dòng)作控制子系統(tǒng)可以通過(guò)13個(gè)波段處理多達(dá)104個(gè)驅(qū)動(dòng)器和每20兆秒發(fā)送參考價(jià)值的所有驅(qū)動(dòng)器。
3兩個(gè)手和足的機(jī)器人
圖2顯示了兩個(gè)手和足的機(jī)器人的結(jié)構(gòu)。機(jī)器人的主要電力組成部分是連接 著伺服驅(qū)動(dòng)器控、制信號(hào)接收器定位傳感器,發(fā)射機(jī),電池驅(qū)動(dòng)器,傳感器和一個(gè)攝像頭,視頻發(fā)射機(jī),沒(méi)有電腦板。伺服驅(qū)動(dòng)器包括一個(gè)齒輪傳動(dòng)電動(dòng)機(jī)和伺 服電路模擬的方塊??刂菩盘?hào)給每個(gè)伺服模塊的位置參考。扭矩伺服模塊可覆蓋 2Kgcm -1 4Kgcm 的速度約 0 .2sec/60deg??刂菩盘?hào)傳輸無(wú)線電路編碼的8個(gè)參考值。該機(jī)器人在圖 2 中有兩個(gè)接收器模塊在芯片上以控制16個(gè)驅(qū)動(dòng)器。
圖3說(shuō)明了方向傳感器使用了一套垂直開(kāi)關(guān)。垂直開(kāi)關(guān)是水銀開(kāi)關(guān)。當(dāng)水銀 開(kāi)關(guān)(a)是傾斜時(shí),下拉關(guān)閉的汞之間接觸的兩個(gè)電極。方向傳感器安裝兩個(gè)汞開(kāi)關(guān),如圖顯示在(b)項(xiàng)。該交換機(jī)提供了兩個(gè)比特信號(hào)用來(lái)檢測(cè) 4 個(gè)方向的傳 感器如圖所示在(c)項(xiàng)。該機(jī)器人具有在其胸部的傳感器并且它可以區(qū)分四個(gè)方向:面朝上,面朝下,站立和顛倒。
該機(jī)體的結(jié)構(gòu)設(shè)計(jì)和模擬在母親環(huán)境下。該機(jī)體的運(yùn)動(dòng)學(xué)模型是被描述面向一個(gè)口齒不清的對(duì)象,這使我們能夠描述幾何實(shí)體模型和窗口界面設(shè)計(jì)的行為。
Figure 3: The orientation sensor built from two mercury switches
Figure 4 shows some of the classes in the programming environment for remote-brained robots. The class hierarchy provides a rich platform for extending development to a variety of robots.
4基于視覺(jué)的平衡
該機(jī)器人可以用兩條腿站起來(lái)。因?yàn)樗梢愿淖儥C(jī)體的重心,通過(guò)控制踝關(guān)節(jié)的角度,它可以進(jìn)行靜態(tài)的兩足行走。如果地面不平整或不穩(wěn)定,在靜態(tài)步行期間機(jī)器人必需控制她的身體平衡。
為了視覺(jué)平衡和保持移動(dòng)平穩(wěn),它要有高速的視覺(jué)系統(tǒng)。我們已經(jīng)用相關(guān)的芯片制定了一項(xiàng)跟蹤視覺(jué)板。這個(gè)視覺(jué)板由帶著特別 LSI 芯片(電位:運(yùn)動(dòng) 估計(jì)處理器)擴(kuò)張轉(zhuǎn)換器組成 ,與執(zhí)行本地圖像塊匹配。
Figure 4: Class hierarchy
Figure 5: Balancing experiment
The inputs to the processor are an image used as the reference block and an image used as the search window. The reference block can be up to 16 × 16 pixels. The size of the search window depends on the size of the reference block and is usually up to 32 × 32 pixels, so that it can contain 16 × 16 possible matches. The processor computes the 256 SAD (sum of absolute differences) values between the reference block and the 256 candidate blocks in the search window and finds the best-matching block, that is, the one with the minimum SAD value.
Block matching is very powerful when the target only translates. However, ordinary block matching cannot track a target when it rotates. To overcome this difficulty, we developed a new method that follows candidate templates matched to the actual rotation of the target. The rotated-template method first generates all rotated images of the target, and several adequate candidate reference templates are selected and matched while the scene in the front view is being tracked. Figure 5 shows a balancing experiment. In this experiment the robot stands on an inclined board and visually tracks the scene in front of it. It remembers the vertical orientation of an object as the reference for visual tracking and generates rotated images of this reference image. By tracking the reference object with the rotated images, the vision system can measure the rotation of the body. To keep its balance, the robot feedback-controls its body rotation so as to control the center of gravity of its body. The rotational visual tracker can track at video rate.
Figure 6: Walking gait
圖 7 雙足步行實(shí)驗(yàn)
5 Biped Walking
If a bipedal robot can control its center of gravity freely, it can perform biped walking. The robot shown in Figure 2 has degrees of freedom in the left-right direction at the ankles, so it can walk bipedally in a static manner. The motion sequence of one walking cycle consists of eight phases, as shown in Figure 6. One step consists of four phases: move the center of gravity over the supporting foot, lift the leg, move the leg forward, and place the leg. Because the body is described as a solid model, the robot can generate a body configuration that moves the center of gravity over the foot according to the parameter for the height of the center of gravity. After this movement, the robot can lift the other leg and move it forward. While lifting a leg, the robot must control its configuration so as to keep the center of gravity above the supporting foot. Since the stability of the balance depends on the height of the center of gravity, the robot selects suitable knee angles. Figure 7 shows a sequence of biped-walking experiments.
6 滾動(dòng)和站立
圖8顯示了一系列滾動(dòng),坐著和站起來(lái)的動(dòng)作。這個(gè)動(dòng)作要求胳膊和腿之間的協(xié)調(diào)。由于步行機(jī)器人有一個(gè)電池,該機(jī)器人可使用電池的重量做翻轉(zhuǎn)動(dòng)作。當(dāng)機(jī)器人抬起左腿,向后移動(dòng)左臂且右臂向前,它可以得到機(jī)體周?chē)男D(zhuǎn)力矩。如果身體開(kāi)始轉(zhuǎn)動(dòng),右腿向后移動(dòng)并且左腳依賴(lài)臉部返回原來(lái)位置。翻滾運(yùn)動(dòng)身體的變化方向從仰視到俯視。它可通過(guò)方向傳感器核查。得到正面朝下的方向后,向下移動(dòng)機(jī)器人的手臂以坐在兩個(gè)腳上。這個(gè)動(dòng)作引起了雙手和地面之間的滑動(dòng)。如果手臂的長(zhǎng)度不夠達(dá)到在腳上的身體重心,這個(gè)坐的運(yùn)動(dòng)要求有手臂來(lái)推動(dòng)運(yùn)動(dòng)。站立運(yùn)動(dòng)是被控制的,以保持平衡。
圖 8 一系列滾動(dòng)和站立運(yùn)動(dòng)
7 通過(guò)集成傳感器網(wǎng)絡(luò)轉(zhuǎn)型的綜合
為了使上述描述的基本動(dòng)作成為一體,我們通過(guò)一種方法來(lái)描述一種被認(rèn)為是根據(jù)傳感器狀況的網(wǎng)絡(luò)轉(zhuǎn)型。圖9顯示了綜合了基本動(dòng)作機(jī)器人的狀態(tài)轉(zhuǎn)移圖: 兩足行走,滾動(dòng),坐著和站立。這種一體化提供了機(jī)器人保持行走甚至跌倒時(shí)的能力。普通的雙足行走是由兩步組成,連續(xù)的左腿在前和右腿在前。這個(gè)姿勢(shì)依賴(lài)于背部和“臉部”和“站立”是一樣的。也就是說(shuō),機(jī)器人的機(jī)體形狀是相同的,但方向是不同的。
該機(jī)器人可以探測(cè)機(jī)器人是否依賴(lài)于背部或面部使用方向傳感器。當(dāng)機(jī)器人發(fā)覺(jué)跌倒時(shí),它改變了依賴(lài)于背部或腹部通過(guò)移動(dòng)不確定姿勢(shì)的狀況。如果機(jī)器人依賴(lài)于背部起來(lái),一系列的動(dòng)作將被計(jì)劃執(zhí)行:翻轉(zhuǎn)、坐下和站立動(dòng)作。如果這種情況是依賴(lài)于臉部,它不執(zhí)行翻轉(zhuǎn)而是移動(dòng)手臂執(zhí)行坐的動(dòng)作。
8結(jié)束語(yǔ)
本文提出了一個(gè)兩手臂的可以執(zhí)行靜態(tài)雙足行走,翻轉(zhuǎn)和站立動(dòng)作的機(jī)器人。 建立這種行為的關(guān)鍵是遠(yuǎn)程腦方法。正如實(shí)驗(yàn)表明,無(wú)線技術(shù)允許機(jī)體自由移動(dòng)。這似乎也改變我們概念化機(jī)器人的一種方式。在我們的實(shí)驗(yàn)室已經(jīng)發(fā)展一種新的 研究環(huán)境,更適合于機(jī)器人和真實(shí)世界的人工智能。
這里提出的機(jī)器人是一個(gè)有腿的機(jī)器人。我們的視覺(jué)系統(tǒng)是基于高速塊匹配功能實(shí)施大規(guī)模集成電路的運(yùn)動(dòng)估算。視覺(jué)系統(tǒng)提供了與人交往作用的機(jī)體活力和適應(yīng)能力。機(jī)械狗表現(xiàn)出建立在跟蹤測(cè)距的基礎(chǔ)上的適應(yīng)行為。機(jī)械類(lèi)人猿已經(jīng)表明跟蹤和記憶的視覺(jué)功能和它們?cè)诨?dòng)行為上的綜合。
一個(gè)兩手臂機(jī)器人的研究為智能機(jī)器人研究提供了一個(gè)新的領(lǐng)域。因?yàn)樗母鞣N行為可能造成一個(gè)靈活的機(jī)體。遠(yuǎn)程腦方法將支持以學(xué)習(xí)為基礎(chǔ)行為的研 究領(lǐng)域。下一個(gè)研究任務(wù)包括:如何借鑒人類(lèi)行為以及如何讓機(jī)器人提高自身的學(xué)術(shù)行為。
Multi-degree of freedom walking robot
Masayuki INABA, Fumio KANEHIRO
Satoshi KAGAMI, Hirochika INOUE
Department of Mechano-Informatics
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, 113 Tokyo, JAPAN
Abstract
Focusing attention on flexibility and intelligent reactivity in the real world, it is more important to build, not a robot that won't fall down, but a robot that can get up if it does fall down. This paper presents research on a two-armed bipedal robot, an ape-like robot, which can perform biped walking, rolling over and standing up. The robot consists of a head, two arms, and two legs. The control system of the biped robot is designed based on the remote-brained approach, in which a robot does not bring its own brain within the body and talks with it by radio links. This remote-brained approach enables a robot to have both a heavy brain with powerful computation and a lightweight body with multiple joints. The robot can keep its balance while standing using tracking vision, detect whether it has fallen down by a set of vertical sensors, and perform a getting-up motion by collaborating its two arms and two legs. The developed system and experimental results are described with illustrated real examples.
1 Introduction
As human children show, it is indispensable to have the capability of getting up in order to learn biped locomotion. In order to build a robot which tries to learn biped walking automatically, the body should be designed to have structures that support getting up as well as sensors that tell whether it is lying down or not.
When a biped robot has arms, it can perform various behaviors as well as walking. Research on biped walking robots has produced working realizations, but it has mainly focused on the dynamics of walking, treating it as an advanced problem in control. However, focusing attention on intelligent reactivity in the real world, it is more important to build, not a robot that won't fall down, but a robot that can get up if it does fall down.
In order to build a robot that can get up if it falls down, the robot needs a sensing system to keep the body balance and to know whether it has fallen down or not. Although vision is one of the most important sensing functions of a robot, it is hard to build a robot with a powerful vision system on its own body because of the size and power limitations of a vision system. If we want to advance research on vision-based robot behaviors requiring dynamic reactions and intelligent reasoning based on experience, the robot body has to be lightweight enough to react quickly and have many DOFs in actuation to show a variety of intelligent behaviors.
As for legged robots, there is only a little research on vision-based behaviors. The difficulties in advancing experimental research on vision-based legged robots are caused by the limitations of the vision hardware. It is hard to keep developing advanced vision software on limited hardware. In order to solve these problems and advance the study of vision-based behaviors, we have adopted a new approach through building remote-brained robots. The body and the brain are connected by wireless links, using wireless cameras and remote-controlled actuators. As a robot body does not need computers on board, it becomes easier to build a lightweight body with many DOFs in actuation.
In this research, we developed a two-armed bipedal robot using the remote-brained robot environment and made it perform balancing based on vision and getting up through cooperating arms and legs. The system and experimental results are described below.
2 The Remote-Brained System
The remote-brained robot does not bring its own brain within the body. It leaves the brain in the mother environment and communicates with it by radio links. This allows us to build a robot with a free body and a heavy brain. The connection link between the body and the brain defines the interface between software and hardware. Bodies are designed to suit each research project and task. This enables us to advance in performing research with a variety of real robot systems.
A major advantage of remote-brained robots is that the robot can have a large and heavy brain based on super parallel computers. Although hardware technology for vision has advanced and produced powerful compact vision systems, the size of the hardware is still large. Wireless connection between the camera and the vision processor has been a research tool. The remote-brained approach allows us to progress in the study of a variety of experimental issues in vision-based robotics.
Another advantage of remote-brained approach is that the robot bodies can be lightweight. This opens up the possibility of working with legged mobile robots. As with animals, if a robot has 4 limbs it can walk. We are focusing on vision-based adaptive behaviors of 4-limbed robots, mechanical animals, experimenting in a field as yet not much studied.
The brain is raised in the mother environment inherited over generations. The brain and the mother environment can be shared with newly designed robots. A developer using the environment can concentrate on the functional design of a brain. For robots whose brain is raised in a mother environment, the brain can benefit directly from the mother's 'evolution', meaning that the software gains power easily when the mother is upgraded to a more powerful computer. Figure 1 shows the configuration of the remote-brained system, which consists of the brain base, the robot body and the brain-body interface.
In the remote-brained approach the design and the performance of the interface between brain and body are the key. Our current implementation adopts a fully remote-brained approach, which means the body has no computer on board. The current system consists of the vision subsystems, the non-vision sensor subsystem and the motion control subsystem. A block can receive video signals from cameras on robot bodies. The vision subsystems are parallel sets, each consisting of eight vision boards.
A body just has a receiver for motion instruction signals and a transmitter for sensor signals. The sensor information is transmitted from a video transmitter. It is possible to transmit other sensor information such as touch and servo error through the video transmitter by integrating the signals into a video image. The actuator is a geared module which includes an analog servo circuit and receives a position reference value from the motion receiver. The motion control subsystem can handle up to 104 actuators through 13 wave bands and send the reference values to all the actuators every 20msec.
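As a rough illustration of the timing described above, the following sketch packs eight position references per wave band and refreshes all 13 bands, i.e. up to 104 actuators, every 20 ms. The frame format and the send_frame driver are purely hypothetical, since the paper does not specify the radio encoding.

import time

NUM_BANDS = 13          # radio wave bands
SERVOS_PER_BAND = 8     # position references carried per band
CYCLE_SEC = 0.020       # reference values are resent every 20 msec

def pack_frame(references):
    """Pack 8 position references (0.0-1.0) into one hypothetical radio frame."""
    assert len(references) == SERVOS_PER_BAND
    # Scale each reference to a byte; the real encoding is not given in the paper.
    return bytes(int(max(0.0, min(1.0, r)) * 255) for r in references)

def send_frame(band, frame):
    """Placeholder for the radio transmitter driver (assumed, not from the paper)."""
    pass

def control_loop(get_references):
    """Send reference values for all actuators once per 20 ms cycle."""
    while True:
        start = time.monotonic()
        refs = get_references()  # up to 104 values computed by the brain base
        for band in range(NUM_BANDS):
            chunk = refs[band * SERVOS_PER_BAND:(band + 1) * SERVOS_PER_BAND]
            send_frame(band, pack_frame(chunk))
        time.sleep(max(0.0, CYCLE_SEC - (time.monotonic() - start)))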
3 The Two-Armed Bipedal Robot
Figure 2 shows the structure of the two-armed bipedal robot. The main electric components of the robot are joint servo actuators, control signal receivers, an orientation sensor with a transmitter, a battery set for actuators and sensors, and a camera with a video transmitter. There is no computer on board. A servo actuator includes a geared motor and an analog servo circuit in its box. The control signal to each servo module is a position reference. The torques of the available servo modules cover 2 kg·cm to 14 kg·cm, with a speed of about 0.2 sec/60 deg. The control signal transmitted on a radio link encodes eight reference values. The robot in Figure 2 has two receiver modules on board to control 16 actuators.
Figure 3 explains the orientation sensor, which uses a set of vertical switches. The vertical switch is a mercury switch. When the mercury switch (a) is tilted, the drop of mercury closes the contact between the two electrodes. The orientation sensor mounts two mercury switches as shown in (b). The switches provide a two-bit signal to detect the four orientations of the sensor as shown in (c). The robot has this sensor at its chest, and it can distinguish four orientations: face up, face down, standing and upside down.
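The two-bit decoding can be illustrated in a few lines; the particular bit-to-orientation assignment below is an assumption, since the text only states that two mercury switches yield four distinguishable orientations.

# Decode the two mercury-switch bits into the four orientations named in the text.
# The mapping is a hypothetical assignment for illustration only.
ORIENTATIONS = {
    (0, 0): "standing",
    (0, 1): "face up",
    (1, 0): "face down",
    (1, 1): "upside down",
}

def read_orientation(switch_a, switch_b):
    """switch_a, switch_b: 0/1 states of the two mercury switches."""
    return ORIENTATIONS[(switch_a, switch_b)]

print(read_orientation(0, 1))  # -> "face up" under the assumed bit assignment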
The body structure is designed and simulated in the mother environment. The kinematic model of the body is described in an object-oriented Lisp, EusLisp, which has enabled us to describe the geometric solid model and the window interface for behavior design.
Figure 4 shows some of the classes in the programming environment for remote-brained robots written in EusLisp. The hierarchy of the classes provides us with rich facilities for extending the development of various robots.
4 Vision-Based Balancing
The robot can stand up on two legs. As it can change the gravity center of its body by controlling the ankle angles, it can perform static bipedal walks. During static walking the robot has to control its body balance if the ground is not flat and stable.
In order to perform vision-based balancing, it is required to have a high-speed vision system that keeps observing the moving scene. We have developed a tracking vision board using a correlation chip. The vision board consists of a transputer augmented with a special LSI chip (MEP: Motion Estimation Processor) which performs local image block matching.
The inputs to the processor MEP are an image as a reference block and an image for a search window. The size of the reference block is up to 16 by 16 pixels. The size of the search window depends on the size of the reference block and is usually up to 32 by 32 pixels, so that it can include 16 × 16 possible matches. The processor calculates 256 values of SAD (sum of absolute differences) between the reference block and the 256 blocks in the search window and finds the best matching block, that is, the one which has the minimum SAD value.
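The exhaustive SAD search can be sketched as follows; this is an illustrative software version of the matching the MEP chip performs in hardware, with the 16 × 16 grid of candidate offsets chosen to give exactly the 256 SAD values mentioned above.

import numpy as np

def best_match_sad(reference, window, num_offsets=16):
    """Exhaustive SAD block matching (a sketch, not the MEP implementation).

    reference: the 16x16 reference block; window: the search window (e.g. 32x32).
    The 16x16 = 256 candidate blocks are taken at offsets 0..15 in each direction.
    Returns ((row, col), sad) for the candidate with the minimum SAD.
    """
    bh, bw = reference.shape
    ref = reference.astype(np.int32)
    best_offset, best_sad = None, None
    for r in range(num_offsets):
        for c in range(num_offsets):
            block = window[r:r + bh, c:c + bw].astype(np.int32)
            sad = int(np.abs(block - ref).sum())  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_offset, best_sad = (r, c), sad
    return best_offset, best_sad

# Example with random 8-bit image data: plant the reference inside the window.
rng = np.random.default_rng(0)
win = rng.integers(0, 256, (32, 32), dtype=np.uint8)
ref = win[5:21, 7:23].copy()
print(best_match_sad(ref, win))  # -> ((5, 7), 0)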
Block matching is very powerful when the target moves only in translation. However, the ordinary block matching method cannot track the target when it rotates. In order to overcome this difficulty, we developed a new method which follows up the candidate templates according to the real rotation of the target. The rotated-template method first generates all the rotated target images in advance, and several adequate candidates of the reference template are selected and matched. Figure 5 shows a balancing experiment. In this experiment the robot stands on an inclined board while its vision is tracking the scene in the front view. It remembers the vertical orientation of an object as the reference for visual tracking and generates several rotated images of the reference image. If the vision tracks the reference object using the rotated images, it can measure the body rotation. In order to keep the body balance, the robot feedback-controls its body rotation to control the center of the body gravity. The rotational visual tracker can track the image at video rate.
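A rough sketch of the rotated-template idea and the resulting balance feedback is given below; the rotation step, the SciPy-based template rotation and the proportional ankle correction are all assumptions for illustration, not the paper's implementation.

import numpy as np
from scipy import ndimage  # used only to pre-rotate the reference template

def make_rotated_templates(template, angles_deg):
    """Pre-generate rotated copies of the reference template (rotated-template method)."""
    return {a: ndimage.rotate(template, a, reshape=False, mode="nearest")
            for a in angles_deg}

def estimate_body_roll(frame_block, rotated_templates):
    """Return the rotation angle whose template best matches the current block (min SAD).
    frame_block must have the same shape as the templates."""
    sads = {a: int(np.abs(frame_block.astype(np.int32) - t.astype(np.int32)).sum())
            for a, t in rotated_templates.items()}
    return min(sads, key=sads.get)

def ankle_correction(measured_roll_deg, gain=0.5):
    """Simple proportional feedback on the ankle angle to recenter the gravity center.
    The gain is a placeholder; the paper does not give the actual control law."""
    return -gain * measured_roll_deg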
5 Biped Walking
If a bipedal robot can control its center of gravity freely, it can perform a biped walk. As the robot shown in Figure 2 has degrees of freedom in the left and right directions at the ankle position, it can perform bipedal walking in a static way.
The motion sequence of one cycle in biped walking consists of eight phases, as shown in Figure 6. One step consists of four phases: move-gravity-center-on-foot, lift-leg, move-forward-leg, and place-leg. As the body is described in a solid model, the robot can generate a body configuration for move-gravity-center-on-foot according to the parameter for the height of the gravity center. After this movement, the robot can lift the other leg and move it forward. In lifting a leg, the robot has to control the configuration in order to keep the center of gravity above the supporting foot. As the stability of balance depends on the height of the gravity center, the robot selects suitable angles of the knees. Figure 7 shows a sequence of experiments of the robot in biped walking.
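The eight-phase cycle can be written down as a simple schedule; the sketch below only fixes the ordering of the four phases for each leg, with the execute() callback standing in for the configuration generation that the solid model performs.

# One walking cycle = two steps (right then left), four phases per step = eight phases.
STEP_PHASES = ["move-gravity-center-on-foot", "lift-leg", "move-forward-leg", "place-leg"]

def one_walking_cycle(execute):
    """Run one biped walking cycle (eight phases in total)."""
    for swing_leg in ("right", "left"):
        support_leg = "left" if swing_leg == "right" else "right"
        for phase in STEP_PHASES:
            # Each phase is a whole-body configuration generated from the solid model,
            # keeping the center of gravity above the supporting foot.
            execute(phase, swing_leg=swing_leg, support_leg=support_leg)

one_walking_cycle(lambda phase, **legs: print(phase, legs))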
6 Rolling Over and Standing Up
Figure 8 shows the sequence of rolling over, sitting and standing up. This motion requires coordination between arms and legs.
As the robot's foot contains a battery, the robot can make use of the weight of the battery for the roll-over motion. When the robot throws up the left leg and moves the left arm back and the right arm forward, it can get a rotary moment around the body. If the body starts turning, the right leg moves back and the left foot returns to its position so that the robot lies on its face. This roll-over motion changes the body orientation from face up to face down. It can be verified by the orientation sensor.
After getting face down orientation, the robot moves the arms down to sit on two feet. This motion causes slip movement between hands and the ground. If the length of the arm is not enough to carry the center of gravity of the body onto feet, this sitting motion requires dynamic pushing motion by arms. The standing motion is controlled in order to keep the balance.
7 Integration through Building Sensor-Based Transition Net
In order to integrate the basic actions described above, we adopted a method to describe a sensor-based transition network in which transitions are taken according to the sensor status. Figure 9 shows a state transition diagram of the robot which integrates the basic actions: biped walking, rolling over, sitting, and standing up. This integration provides the robot with the capability of continuing to walk even when it falls down.
The ordinary biped walk is composed of two states, Left-leg Fore and Right-leg Fore, taken successively. The poses in 'Lie on the Back' and 'Lie on the Face' are the same as the one in 'Stand'; that is, the shape of the robot body is the same but the orientation is different.
The robot can detect whether it lies on its back or on its face using the orientation sensor. When the robot detects that it has fallen down, it changes the state to 'Lie on the Back' or 'Lie on the Face' by moving to the neutral pose. If the robot gets up from 'Lie on the Back', the motion sequence is planned to execute Roll-over, Sit and Stand-up motions. If the state is 'Lie on the Face', it does not execute Roll-over but moves its arms up to perform the sitting motion.
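A minimal sketch of such a sensor-based transition network is shown below; the state and action names follow the text, but the table itself is a simplified stand-in for the network in Figure 9.

# Simplified sensor-based transition network (not the full network of Figure 9).
TRANSITIONS = {
    ("Stand", "walk"):             ["Left-leg Fore", "Right-leg Fore", "Stand"],
    ("Lie on the Back", "get-up"): ["Roll-over", "Sit", "Stand-up", "Stand"],
    ("Lie on the Face", "get-up"): ["Sit", "Stand-up", "Stand"],  # no roll-over needed
}

def plan_actions(state, orientation_sensor):
    """Choose the motion sequence from the current state and the orientation sensor."""
    if state == "Stand":
        return TRANSITIONS[("Stand", "walk")]
    # After a detected fall, the neutral pose plus the sensor decide back vs. face.
    lying = "Lie on the Back" if orientation_sensor() == "face up" else "Lie on the Face"
    return TRANSITIONS[(lying, "get-up")]

print(plan_actions("fallen", lambda: "face up"))  # -> roll-over, sit, stand-up, stand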
8 Concluding Remarks
This paper has presented a two-armed bipedal robot which can perform static biped walking, rolling over and standing up motions. The key to building such behaviors is the remote-brained approach. As the experiments have shown, wireless technologies permit robot bodies free movement. It also seems to change the way we conceptualize robotics. In our laboratory it has enabled the development of a new research environment, better suited to robotics and real-world AI.
The robot presented here is a legged robot. As legged locomotion requires dynamic visual feedback control, its vision-based behaviors can prove the effectiveness of the vision system and the remote-brained system. Our vision system is based on a high-speed block matching function implemented in a motion estimation LSI. The vision system provides the body with the reactivity and adaptability required for interaction with people. The mechanical dog has shown adaptive behaviors based on visual tracking and range measurement, and the mechanical ape has shown visual tracking and memorization functions and their integration in interactive behaviors.
Research on the two-armed bipedal robot opens a new field for intelligent robot research, because its flexible body makes a great variety of behaviors possible. The remote-brained approach will also support research on learning-based behaviors. The next research tasks include how to learn from human behavior and how to let the robot improve its own behaviors through learning.