Wanfang College of Science and Technology, Henan Polytechnic University
Undergraduate Graduation Design (Thesis) Mid-term Check Form
Advisor: Deng Le    Title (rank):
School (Department): School of Mechanical and Power Engineering    Teaching and Research Office:
Title
Four-bar linkage tracked search-and-rescue robot
Student Name
Qian Longfei
Class
Mechanical Design Class 2, Grade 2008
Student ID
0828070150
I. Quality of topic selection (to be assessed mainly on the following four points: 1. whether the topic fits the professional training objectives and reflects the requirements of comprehensive training; 2. difficulty of the topic; 3. workload of the topic; 4. how closely the topic is tied to production, scientific research, economic, social, cultural or laboratory-construction practice)
The selected topic is closely tied to the knowledge learned from textbooks, close to actual production conditions, and fairly representative. It leaves ample room for creativity and admits varied design approaches. For an undergraduate in mechanical design the topic is relatively easy: it mainly involves the structural design of a four-bar linkage tracked search-and-rescue robot and an analysis of its obstacle-crossing ability, which matches the requirements for mechanical design students; the robot's control part still requires close cooperation with the computer, communication and electronics disciplines. The structural design covers the robot's overall dimensions, the arrangement of internal components and the external obstacle-climbing motion. The robot adopts a four-bar linkage tracked layout and consists mainly of a frame and two symmetrically arranged track-deformation modules. On each side of the frame is a track-deformation module based on a parallelogram structure, composed of a four-bar deformation mechanism, a main drive wheel, a driven wheel and the track wound around the track wheels. The four-bar deformation mechanism consists of a connecting rod, a driving crank and a driven crank; it provides the driving force and can rotate about the frame to deform the track, giving the robot extra auxiliary motion when crossing obstacles. The topic fully fits the professional training objectives. It is a typical mechanical design task and offers good guidance for the continued learning of students about to graduate: it is not limited to basic mechanical knowledge but also involves materials science, mechanics and other disciplines, giving us some exposure to interdisciplinary work, so the requirements of comprehensive training are fully reflected.
II. Completion of the opening report:
The opening report has been completed. Starting from the intended working environment, a clear design direction for the project has been determined, and a certain understanding of the four-bar linkage tracked search-and-rescue robot has been gained. The project has been designed and analysed, with substantial progress made. The review of the relevant literature has been finished and an overall analysis of the project carried out; the opening report was completed to a relatively high standard.
III. Interim results:
1. The opening report has been completed; the overall layout scheme and the main structural parameters have been determined, and the selection of some standard parts and the design calculations for most components have been finished.
2. Drawing of some of the part drawings is largely complete, and the design specification is being compiled.
3. The English translation is largely finished, and some of the structural designs are now being checked.
四、存在主要問題:
由于專業(yè)基礎(chǔ)知識學(xué)習(xí)不夠深入,設(shè)計經(jīng)驗欠缺,參考資料收集有限,設(shè)計主題思路把握不夠,簡單問題解決不夠靈活;設(shè)計中結(jié)構(gòu)較復(fù)雜,機器人越障能力分析有一定的難度,
數(shù)據(jù)分析與變形草圖的繪制綜合分析機器人順利越過90度障礙物能力。同時機器人內(nèi)部
結(jié)構(gòu)中的電動機,減速器,齒輪設(shè)計等細(xì)節(jié)問題的要求以及內(nèi)部結(jié)構(gòu)安排方式,如何使得
安排即合理又正確等問題需要進一步解決。
V. Advisor's comments on the student's labour, study discipline and graduation design (thesis) progress during graduation practice
Advisor:
Date:
Multi-Degree-of-Freedom Walking Robot
Abstract
In the real world it is important to design a flexible, intelligent robot that can not only fall down but also get up again. This paper presents a two-armed bipedal robot, an ape-like robot, which can walk, roll over and stand up. The robot consists of a head, two arms and two legs. The control system of the biped robot is designed on the remote-brained approach, which solves the problem that a brain carried inside the robot body cannot be linked by radio: the robot does not carry its own brain but talks with it over radio links. This remote control gives the robot both a brain with powerful computation and a lightweight body with multiple joints. The robot can keep its balance using tracking vision, detect whether it has fallen down by a set of vertical sensors, and perform a getting-up motion by coordinating its two arms and two legs. The developed system and the experimental results are described with real examples.
1 Introduction
As human children show, the capability of getting up is indispensable for a robot designed to learn biped locomotion. To build a robot that can walk on two legs automatically, the design must include sensors that tell whether the robot is standing or lying down. Research on biped walking robots has mainly focused on dynamic walking, treating it as an advanced control problem. In the real world, however, when attention turns to intelligent reactivity, it is more important to build not a robot that never falls down, but a robot that can get up when it does.
To build a robot that can both fall down and get up, the robot needs a sensing system that tells whether it has fallen or not. Although vision is one of a robot's most important sensing functions, the size and power limitations of vision systems make it hard to build a powerful vision system on the robot's own body. If we want to go further and study vision-based robot behaviours that demand dynamic reactions and intelligent reasoning based on experience, the robot body must be light enough to react quickly and have many degrees of freedom in actuation to exhibit a variety of intelligent behaviours. As for legged robots, there has been only a little research on vision-based behaviours; the difficulty of experimental research on vision-based legged robots stems from the limits of the hardware, and it is hard to keep developing advanced vision software on limited hardware. To solve these problems and advance the study of vision-based behaviours, a remote-brained approach can be taken: the body and the brain are connected by wireless links, using wireless cameras and remote-controlled actuators. Because the body needs no on-board computers, it becomes much easier to build a lightweight body with many actuated degrees of freedom.
In this research we developed a two-armed bipedal robot in the remote-brained robot environment and made it perform vision-based balancing and getting up through the cooperation of its arms and legs. The system and the experimental results are described below.
Figure 1 Hardware configuration of the remote-brained system
Figure 2 Body structure of the two-armed bipedal robot
2 The Remote-Brained System
A remote-brained robot does not carry its own brain within its body. It leaves the brain in the mother environment and communicates with it by radio links. This lets us build a robot with a free body and a heavy brain. The link between body and brain defines the interface between software and hardware. Bodies are designed to suit each research project and task, which lets us advance research with a variety of real robot systems.
A major advantage of remote-brained robots is that the brain can be large and heavy, based on super-parallel computers. Although vision hardware has advanced to the point of producing powerful compact vision systems, the hardware is still large. A wireless connection between the camera and the vision processor has become a research tool, and the remote-brained approach lets us make progress on a variety of experimental issues in vision-based robotics.
Another advantage of the remote-brained approach is that the robot body can be lightweight, which opens up the possibility of working with legged mobile robots. As with animals, a robot with four limbs can walk. We are focusing on vision-based adaptive behaviours of four-limbed robots, mechanical animals, experimenting in a field that has not yet been studied much.
The brain is raised in a mother environment inherited over generations. The brain and the mother environment can be shared with newly designed robots, so a developer using the environment can concentrate on the functional design of the brain. A robot whose brain is raised in a mother environment benefits directly from the mother's "evolution": the software gains power easily whenever the mother is upgraded to a more powerful computer.
Figure 1 shows the remote-brained system, which consists of the brain base, the robot body and the brain-body interface. In the remote-brained approach the design and performance of the interface between brain and body are the key. Our current implementation is fully remote-brained, meaning the body carries no computer on board. The current system consists of vision subsystems, a non-vision sensor subsystem and a motion control subsystem. A block can receive video signals from the cameras on the robot bodies; each vision subsystem consists of eight vision boards in parallel. A body has only a receiver for motion-instruction signals and a transmitter for sensor signals. The sensor information is sent through a video transmitter; other sensor information, such as touch and servo error, can be transmitted by integrating the signals into a video image. Each actuator is a geared module containing an analog servo circuit, and receives its position reference value from the motion receiver. The motion control subsystem can handle up to 104 actuators through 13 radio bands and sends reference values to all actuators every 20 msec.
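The addressing described above (13 radio bands, eight reference values per band, up to 104 actuators, one update every 20 ms) can be sketched as follows. The id-to-band mapping and all names are assumptions for illustration only; the paper does not specify the actual encoding.

```python
NUM_BANDS = 13
REFS_PER_BAND = 8  # each radio frame encodes eight reference values

def actuator_address(actuator_id):
    """Map a global actuator id (0-103) to a (band, slot) pair."""
    if not 0 <= actuator_id < NUM_BANDS * REFS_PER_BAND:
        raise ValueError("actuator id out of range")
    return divmod(actuator_id, REFS_PER_BAND)

def build_frames(references):
    """Group one position reference per actuator into per-band frames,
    as would be transmitted once every 20 ms control cycle."""
    frames = [[0] * REFS_PER_BAND for _ in range(NUM_BANDS)]
    for actuator_id, ref in enumerate(references):
        band, slot = actuator_address(actuator_id)
        frames[band][slot] = ref
    return frames
```

With this layout, 13 bands of 8 slots each account for exactly the 104 actuators the subsystem can handle.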
3 The Two-Armed Bipedal Robot
Figure 2 shows the structure of the two-armed bipedal robot. Its main electrical components are the joint servo actuators, the control-signal receivers, an orientation sensor with a transmitter, a battery set for the actuators and sensors, and a camera with a video transmitter; there is no computer on board. Each servo actuator packs a geared motor and an analog servo circuit into one box, and the control signal to each servo module is a position reference. The available servo modules cover torques of 2 kgcm to 14 kgcm at a speed of about 0.2 sec/60 deg. Each control signal transmitted over the radio link encodes eight reference values; the robot in Figure 2 carries two receiver modules on board to control its 16 actuators.
Figure 3 explains the orientation sensor, which uses a set of vertical switches. Each vertical switch is a mercury switch: when the switch shown in (a) is tilted, the drop of mercury closes the contact between its two electrodes. The orientation sensor mounts two mercury switches as shown in (b); together they provide a two-bit signal that distinguishes the four orientations of the sensor shown in (c). The robot carries this sensor on its chest and can therefore distinguish four orientations: face up, face down, standing and upside down.
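The two-bit signal described above can be decoded as in this sketch. The particular bit-to-orientation mapping is an assumption for illustration; the paper states only that two switches distinguish four orientations.

```python
# Hypothetical decoding table for the two mercury switches: each switch
# contributes one bit, and the pair selects one of four body orientations.
ORIENTATIONS = {
    (0, 0): "standing",
    (0, 1): "face up",
    (1, 0): "face down",
    (1, 1): "upside down",
}

def decode_orientation(switch_a, switch_b):
    """Return one of the four body orientations from the two switch bits."""
    return ORIENTATIONS[(switch_a, switch_b)]
```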
The body structure is designed and simulated in the mother environment. The kinematic model of the body is described in an object-oriented Lisp, which lets us describe the geometric solid model and the window interface for behaviour design.
Figure 3 The orientation sensor with its two mercury switches
Figure 4 shows some of the classes in the programming environment for remote-brained robots. The class hierarchy provides a rich platform for extending the development of various robots.
4 Vision-Based Balancing
The robot can stand on two legs. Since it can shift the centre of gravity of its body by controlling the ankle angles, it can perform static bipedal walking. If the ground is not flat or stable, the robot must control its body balance during static walking.
Vision-based balancing requires a high-speed vision system that keeps observing the moving scene. We have developed a tracking vision board using a correlation chip: the board consists of a transputer augmented with a special LSI chip (MEP, the Motion Estimation Processor), which performs local image block matching.
Figure 4 The class hierarchy
Figure 5 A balancing experiment
The inputs to the processor are an image used as a reference block and an image used as a search window. The reference block can be up to 16 by 16 pixels; the size of the search window depends on the reference block and is usually up to 32 by 32 pixels, so that it can contain the 16*16 possible matches. The processor computes the 256 SAD (sum of absolute differences) values between the reference block and the 256 blocks in the search window, and finds the best-matching block, that is, the one with the minimum SAD value.
Block matching is very powerful when the target only translates, but the ordinary block matching method cannot track a target when it rotates. To overcome this difficulty we developed a new method that follows candidate templates to the real rotation of the target: the rotated-template method first generates all the rotated target images in advance, then selects several adequate candidate reference templates and matches them against the scene being tracked in the front view. Figure 5 shows a balancing experiment. In this experiment the robot stands on a tilted board while visually tracking the scene in front of it. It remembers the vertical orientation of an object as the reference for visual tracking and generates rotated images of the reference image. By tracking the reference object with these rotated images, the vision system can measure the body rotation. To keep its balance, the robot feedback-controls its body rotation so as to control the centre of gravity of its body. The rotational visual tracker can track at video rate.
Figure 6 Biped walking
Figure 7 Biped walking experiments
5 Biped Walking
If a bipedal robot can control its centre of gravity freely, it can perform a biped walk. The robot shown in Figure 2 has left-right degrees of freedom at the ankles, so it can walk bipedally in a static way. One cycle of the motion consists of eight phases, as shown in Figure 6. One step consists of four phases: move the centre of gravity onto the supporting foot, lift the leg, move the leg forward, and place the leg down. Because the body is described as a solid model, the robot can generate a body configuration that moves the centre of gravity onto the foot according to a parameter for the height of the centre of gravity. After this movement the robot can lift the other leg and move it forward; while lifting a leg it must control its configuration so as to keep the centre of gravity above the supporting foot. Since the stability of the balance depends on the height of the centre of gravity, the robot selects suitable knee angles. Figure 7 shows a sequence of biped walking experiments.
6 Rolling Over and Standing Up
Figure 8 shows the sequence of rolling over, sitting and standing up. This motion requires coordination between the arms and the legs. Since each foot of the robot contains a battery, the robot can use the weight of the batteries for the roll-over motion. When the robot throws up its left leg, moves its left arm back and its right arm forward, it obtains a rotary moment about the body. Once the body starts turning, the right leg moves back and the left foot returns to its position so that the robot lies on its face. The roll-over motion changes the body orientation from face up to face down, which can be verified by the orientation sensor. After reaching the face-down orientation, the robot moves its arms down so as to sit on its two feet; this motion causes the hands to slip along the ground. If the arms are not long enough to carry the body's centre of gravity over the feet, the sitting motion requires a dynamic push with the arms. The standing-up motion is controlled so as to keep the balance.
Figure 8 The sequence of rolling over and standing up
7 Integration through a Sensor-Based Transition Network
To integrate the basic actions described above, we adopted a method that describes a sensor-based transition network, in which transitions are taken according to sensor status. Figure 9 shows the state-transition diagram of the robot, integrating the basic actions: biped walking, rolling over, sitting and standing up. This integration gives the robot the ability to keep walking even when it falls down. The ordinary biped walk is composed of two states taken successively, left-leg-fore and right-leg-fore. The poses in "Lie on the Back" and "Lie on the Face" are the same as the pose in "Stand": the shape of the robot body is the same, only the orientation differs.
The robot can detect whether it is lying on its back or on its face using the orientation sensor. When it detects that it has fallen, it changes state to "Lie on the Back" or "Lie on the Face" by moving to the neutral pose. If the robot gets up from "Lie on the Back", it plans and executes a sequence of roll-over, sit and stand-up motions; if the state is "Lie on the Face", it skips the roll-over and instead moves its arms to perform the sitting motion.
8 Concluding Remarks
This paper has presented a two-armed bipedal robot that can perform a static biped walk, roll over and stand up. The key to building such behaviours is the remote-brained approach. As the experiments have shown, wireless technology allows the robot body to move freely; it also seems to change the way we conceptualise robots. In our laboratory it has enabled a new research environment, better suited to robotics and real-world AI.
The robot presented here is a legged robot. Our vision system is based on high-speed block matching implemented in a motion-estimation LSI. The vision system gives the body vitality and adaptivity in interaction with people: a mechanical dog has exhibited adaptive behaviour based on tracking and range estimation, and a mechanical ape has demonstrated visual tracking and memory functions and their integration in interactive behaviours.
Research on two-armed bipedal robots opens a new field for intelligent robot research, because a flexible body makes a variety of behaviours possible. The remote-brained approach will support research into learning-based behaviours; the next research tasks include how to learn from human behaviour and how a robot can improve its own learned behaviour.
Multi-degree of freedom walking robot
Masayuki INABA, Fumio KANEHIRO
Satoshi KAGAMI, Hirochika INOUE
Department of Mechano-Informatics
The University of Tokyo
7-3-l Hongo, Bunkyo-ku, 113 Tokyo, JAPAN
Abstract
Focusing attention on flexibility and intelligent reactivity in the real world, it is more important to build, not a robot that won't fall down, but a robot that can get up if it does fall down. This paper presents a research on a two-armed bipedal robot, an apelike robot, which can perform biped walking, rolling over and standing up. The robot consists of a head, two arms, and two legs. The control system of the biped robot is designed based on the remote-brained approach in which a robot does not bring its own brain within the body and talks with it by radio links. This remote-brained approach enables a robot to have both a heavy brain with powerful computation and a lightweight body with multiple joints. The robot can keep balance in standing using tracking vision, detect whether it falls down or not by a set of vertical sensors, and perform getting up motion collaborating two arms and two legs. The developed system and experimental results are described with illustrated real examples.
1 Introduction
As human children show, it is indispensable to have capability of getting up motion in order to learn biped locomotion. In order to build a robot which tries to learn biped walking automatically, the body should be designed to have structures to support getting up as well as sensors to know whether it is lying down or not.
When a biped robot has arms, it can perform various behaviors as well as walking. Research on biped walking robots has presented with realization. It has mainly focused on the dynamics in walking, treating it as an advanced problem in control. However, focusing attention on the intelligent reactivity in the real world, it is more important to build, not a robot that won't fall down, but a robot that can get up if it does fall down.
In order to build a robot that can get up if it falls down, the robot needs sensing system to keep the body balance and to know whether it falls down or not. Although vision is one of the most important sensing functions of a robot, it is hard to build a robot with a powerful vision system on its own body because of the size and power limitation of a vision system. If we want to advance research on vision-based robot behaviors requiring dynamic reactions and intelligent reasoning based on experience, the robot body has to be lightweight enough to react quickly and have many DOFS in actuation to show a variety of intelligent behaviors.
As for the legged robot, there is only a little research on vision-based behaviors. The difficulties in advancing experimental research for vision-based legged robots are caused by the limitation of the vision hardware. It is hard to keep developing advanced vision software in limited hardware. In order to solve the problems and advance the study of vision-based behaviors, we have adopted a new approach through building remote-brained robots. The body and the brain are connected by wireless links by using wireless cameras and remote-controlled actuators. As a robot body does not need computers on-board, it becomes easier to build a lightweight body with many DOFs in actuation.
In this research, we developed a two-armed bipedal robot using the remote-brained robot environment and made it perform balancing based on vision and getting up through cooperating arms and legs. The system and experimental results are described below.
2 The Remote-Brained System
The remote-brained robot does not bring its own brain within the body. It leaves the brain in the mother environment and communicates with it by radio links. This allows us to build a robot with a free body and a heavy brain. The connection link between the body and the brain defines the interface between software and hardware. Bodies are designed to suit each research project and task. This enables us to advance in performing research with a variety of real robot systems.
A major advantage of remote-brained robots is that the robot can have a large and heavy brain based on super parallel computers. Although hardware technology for vision has advanced and produced powerful compact vision systems, the size of the hardware is still large. Wireless connection between the camera and the vision processor has been a research tool. The remote-brained approach allows us to progress in the study of a variety of experimental issues in vision-based robotics.
Another advantage of remote-brained approach is that the robot bodies can be lightweight. This opens up the possibility of working with legged mobile robots. As with animals, if a robot has 4 limbs it can walk. We are focusing on vision-based adaptive behaviors of 4-limbed robots, mechanical animals, experimenting in a field as yet not much studied.
The brain is raised in the mother environment inherited over generations. The brain and the mother environment can be shared with newly designed robots. A developer using the environment can concentrate on the functional design of a brain. For robots where the brain is raised in a mother environment, it can benefit directly from the mother's 'evolution', meaning that the software gains power easily when the mother is upgraded to a more powerful computer. Figure 1 shows the configuration of the remote-brained system which consists of brain base, robot body and brain-body interface.
In the remote-brained approach the design and the performance of the interface between brain and body is the key. Our current implementation adopts a fully remotely brained approach, which means the body has no computer onboard. Current system consists of the vision subsystems, the non-vision sensor subsystem and the motion control subsystem. A block can receive video signals from cameras on robot bodies. The vision subsystems are parallel sets each consisting of eight vision boards.
A body just has a receiver for motion instruction signals and a transmitter for sensor signals. The sensor information is transmitted from a video transmitter. It is possible to transmit other sensor information such as touch and servo error through the video transmitter by integrating the signals into a video image. The actuator is a geared module which includes an analog servo circuit and receives a position reference value from the motion receiver. The motion control subsystem can handle up to 104 actuators through 13 wave bands and send the reference values to all the actuators every 20msec.
3 The Two-Armed Bipedal Robot
Figure 2 shows the structure of the two-armed bipedal robot. The main electric components of the robot are joint servo actuators, control signal receivers, an orientation sensor with transmitter, a battery set for actuators and sensors, and a camera with video transmitter. There is no computer on-board. A servo actuator includes a geared motor and analog servo circuit in the box. The control signal to each servo module is position reference. The torque of servo modules available covers 2 Kgcm - 14 Kgcm with the speed about 0.2 sec/60 deg. The control signal transmitted on radio link encodes eight reference values. The robot in figure 2 has two receiver modules onboard to control 16 actuators.
Figure 3 explains the orientation sensor using a set of vertical switches. The vertical switch is a mercury switch. When the mercury switch (a) is tilted, the drop of mercury closes the contact between the two electrodes. The orientation sensor mounts two mercury switches as shown in (b). The switches provide a two-bit signal to detect the four orientations of the sensor as shown in (c). The robot has this sensor at its chest and it can distinguish four orientations: face up, face down, standing and upside down.
The body structure is designed and simulated in the mother environment. The kinematic model of the body is described in an object-oriented lisp, EusLisp, which has enabled us to describe the geometric solid model and window interface for behavior design.
Figure 4 shows some of the classes in the programming environment for remote-brained robots written in EusLisp. The hierarchy in the classes provides us with rich facilities for extending development of various robots.
4 Vision-Based Balancing
The robot can stand up on two legs. As it can change the gravity center of its body by controlling the ankle angles, it can perform static bipedal walks. During static walking the robot has to control its body balance if the ground is not flat and stable.
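The balancing described here, shifting the centre of gravity through the ankle angles, can be sketched as a simple feedback loop. The proportional control law and the gain are assumptions for illustration; the paper does not give the actual controller.

```python
def ankle_correction(measured_tilt_deg, gain=0.5):
    """Proportional correction (degrees) applied to the ankle joint,
    opposing the visually measured body tilt. Gain is an assumed value."""
    return -gain * measured_tilt_deg

def balance_step(current_ankle_deg, measured_tilt_deg):
    """One control cycle: update the ankle reference so the centre of
    gravity moves back over the feet."""
    return current_ankle_deg + ankle_correction(measured_tilt_deg)
```

A positive tilt produces a negative ankle correction, so repeated cycles drive the measured tilt toward zero.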
In order to perform vision-based balancing it is required to have a high speed vision system to keep observing the moving scene. We have developed a tracking vision board using a correlation chip. The vision board consists of a transputer augmented with a special LSI chip (MEP: Motion Estimation Processor) which performs local image block matching.
The inputs to the processor MEP are an image as a reference block and an image for a search window. The size of the reference block is up to 16 by 16 pixels. The size of the search window depends on the size of the reference block and is usually up to 32 by 32 pixels so that it can include 16 * 16 possible matches. The processor calculates 256 values of SAD (sum of absolute difference) between the reference block and 256 blocks in the search window and also finds the best matching block, that is, the one which has the minimum SAD value.
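The block matching the MEP performs can be sketched in plain Python: compute the SAD between the reference block and every candidate block in the search window, and keep the minimum. This is an illustrative software stand-in for the LSI, not the chip's implementation.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size pixel blocks
    (lists of rows of integers)."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_match(reference, window):
    """Slide `reference` over `window` and return the (row, col) offset
    of the candidate block with the minimum SAD, plus that SAD value."""
    bh, bw = len(reference), len(reference[0])
    best, best_score = None, None
    for r in range(len(window) - bh + 1):
        for c in range(len(window[0]) - bw + 1):
            candidate = [row[c:c + bw] for row in window[r:r + bh]]
            score = sad(reference, candidate)
            if best_score is None or score < best_score:
                best, best_score = (r, c), score
    return best, best_score
```

With a 16x16 reference inside a 32x32 window this exhaustive search is exactly what the chip parallelises in hardware.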
Block matching is very powerful when the target moves only in translation. However, the ordinary block matching method cannot track the target when it rotates. In order to overcome this difficulty, we developed a new method which follows up the candidate templates to real rotation of the target. The rotated template method first generates all the rotated target images in advance, and several adequate candidates of the reference template are selected and matched while tracking the scene in the front view. Figure 5 shows a balancing experiment in which the robot stands on a tilted board and visually tracks the scene in front of it. It remembers the vertical orientation of an object as the reference for visual tracking and generates several rotated images of the reference image. If the vision tracks the reference object using the rotated images, it can measure the body rotation. In order to keep the body balance, the robot feedback controls its body rotation to control the center of the body gravity. The rotational visual tracker can track the image at video rate.
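The rotated template idea can be sketched as below: pre-generate rotated copies of the reference template, match each against the observed block, and read the body rotation off the best-scoring template. For a dependency-free illustration the rotations are in 90-degree steps, whereas the real system would use finer angles.

```python
def _sad(a, b):
    """Sum of absolute differences between two equal-size pixel blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def rotate90(block):
    """Rotate a 2-D pixel block 90 degrees clockwise."""
    return [list(row) for row in zip(*block[::-1])]

def rotated_templates(reference):
    """Pre-generate the reference template at 0, 90, 180 and 270 degrees."""
    templates = {0: reference}
    for angle in (90, 180, 270):
        templates[angle] = rotate90(templates[angle - 90])
    return templates

def estimate_body_rotation(scene_block, reference):
    """Pick the rotation whose template best matches the observed block."""
    templates = rotated_templates(reference)
    return min(templates, key=lambda a: _sad(templates[a], scene_block))
```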
5 Biped Walking
If a bipedal robot can control the center of gravity freely, it can perform biped walk. As the robot shown in Figure 2 has degrees of freedom in the left and right directions at the ankle position, it can perform bipedal walking in a static way.
The motion sequence of one cycle in biped walking consists of eight phases as shown in Figure 6. One step consists of four phases: move-gravity-center-on-foot, lift-leg, move-forward-leg, place-leg. As the body is described in a solid model, the robot can generate a body configuration for move-gravity-center-on-foot according to the parameter of the height of the gravity center. After this movement, the robot can lift the other leg and move it forward. In lifting a leg, the robot has to control the configuration in order to keep the center of gravity above the supporting foot. As the stability in balance depends on the height of the gravity center, the robot selects suitable angles of the knees. Figure 7 shows a sequence of experiments of the robot in biped walking.
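The gait above can be written out as a phase sequence: the same four phases executed for each leg in turn give the eight phases of one cycle in Figure 6. A sketch; the phase names follow the text.

```python
# The four phases of one step, in the order given in the text.
STEP_PHASES = ["move-gravity-center-on-foot", "lift-leg",
               "move-forward-leg", "place-leg"]

def walk_cycle():
    """Return the eight (leg, phase) pairs of one biped walking cycle:
    a left step followed by a right step."""
    return [(leg, phase) for leg in ("left", "right") for phase in STEP_PHASES]
```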
6 Rolling Over and Standing Up
Figure 8 shows the sequence of rolling over, sitting and standing up. This motion requires coordination between arms and legs.
As the robot foot consists of a battery, the robot can make use of the weight of the battery for the roll-over motion. When the robot throws up the left leg and moves the left arm back and the right arm forward, it can get rotary moment around the body. If the body starts turning, the right leg moves back and the left foot returns its position to lie on the face. This rollover motion changes the body orientation from face up to face down. It can be verified by the orientation sensor.
After getting face down orientation, the robot moves the arms down to sit on two feet. This motion causes slip movement between hands and the ground. If the length of the arm is not enough to carry the center of gravity of the body onto feet, this sitting motion requires dynamic pushing motion by arms. The standing motion is controlled in order to keep the balance.
7 Integration through Building Sensor-Based Transition Net
In order to integrate the basic actions described above, we adopted a method to describe a sensor-based transition network in which transition is considered according to sensor status. Figure 9 shows a state transition diagram of the robot which integrates basic actions: biped walking, rolling over, sitting, and standing up. This integration provides the robot with capability of keeping walking even when it falls down.
The ordinary biped walk is composed by taking two states, Left-leg Fore and Right-leg Fore, successively. The poses in 'Lie on the Back' and 'Lie on the Face' are the same as the one in 'Stand'. That is, the shape of the robot body is the same but the orientation is different.
The robot can detect whether it lies on the back or the face using the orientation sensor. When the robot detects that it has fallen down, it changes the state to 'Lie on the Back' or 'Lie on the Face' by moving to the neutral pose. If the robot gets up from 'Lie on the Back', the motion sequence is planned to execute Roll-over, Sit and Stand-up motions. If the state is 'Lie on the Face', it does not execute Roll-over but moves arms up to perform the sitting motion.
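The sensor-based recovery described here can be sketched as a small transition table: the orientation sensor reading selects the motion sequence that returns the robot to standing. The exact table is an assumption reconstructed from the behaviour the text describes.

```python
# Recovery sequences per detected lying state, following the text:
# from the back the robot rolls over first; from the face it does not.
RECOVERY = {
    "lie-on-back": ["roll-over", "sit", "stand-up"],
    "lie-on-face": ["sit", "stand-up"],
}

def plan_recovery(orientation):
    """Motion sequence bringing the robot back to standing.
    Unknown orientations first move to the neutral pose."""
    if orientation == "standing":
        return []
    return RECOVERY.get(orientation, ["move-to-neutral-pose"])
```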
8 Concluding Remarks
This paper has presented a two-armed bipedal robot which can perform statically stable biped walk, rolling over and standing up motions. The key to building such behaviors is the remote-brained approach. As the experiments have shown, wireless technologies permit robot bodies free movement. It also seems to change the way we conceptualize robotics. In our laboratory it has enabled the development of a new research environment, better suited to robotics and real-world AI.
The robot presented here is a legged robot. As legged locomotion requires dynamic visual feedback control, its vision-based behaviors can prove the effectiveness of the vision system and the remote-brained system. Our vision system is based on high speed block matching functions implemented with a motion estimation LSI.