家用多功能和面機(jī)的設(shè)計(jì)【雙葉片攪拌和面容量為10kg】
家用多功能和面機(jī)的設(shè)計(jì)【雙葉片攪拌和面容量為10kg】,雙葉片攪拌,和面容量為10kg,家用多功能和面機(jī)的設(shè)計(jì)【雙葉片攪拌,和面容量為10kg】,家用,多功能,和面,設(shè)計(jì),葉片,攪拌,容量,10,kg
家用多功能和面機(jī)盧佳敏工學(xué)院農(nóng)業(yè)機(jī)械化及其自動(dòng)化專業(yè)2014年5月中國(guó)是世界上最大的面制品生產(chǎn)國(guó),而和面機(jī)是其生產(chǎn)加工的重要設(shè)備其性能的好壞、結(jié)構(gòu)的正誤。將直接影響食品的營(yíng)養(yǎng)、感官等質(zhì)量指標(biāo),進(jìn)而影響著企業(yè)的經(jīng)濟(jì)效益和社會(huì)效益。隨著中國(guó)綜合實(shí)力的躍升,消費(fèi)者對(duì)食品的質(zhì)量前所未有的關(guān)注,這也就對(duì)和面機(jī)市場(chǎng)形成一種倒逼驅(qū)動(dòng)力,只有不斷研發(fā)出更好的和面機(jī),才能立足于市場(chǎng)。隨著人們生活水平的不斷提高,越來(lái)越多的人喜歡自己在家里制作面點(diǎn),不僅僅是為了滿足自己的食欲,更是一種精神的上的享受。但是揉面團(tuán)是個(gè)費(fèi)力又費(fèi)時(shí)的事,因此家用和面機(jī)應(yīng)運(yùn)而生。目錄1234總裝圖翻缸卸料裝置部件圖攪拌傳動(dòng)系統(tǒng)部件圖攪拌裝置5支架總裝圖翻缸裝置攪拌傳動(dòng)系統(tǒng)攪拌裝置支架翻缸卸料裝置部件圖卸料方式主要有底部卸料和翻缸卸料,該設(shè)計(jì)和面量較小,適合采用電動(dòng)翻缸卸料攪拌傳動(dòng)系統(tǒng)部件圖攪拌傳動(dòng)形式:選用三相異步電機(jī),通過(guò)帶輪、兩級(jí)齒輪傳動(dòng),帶動(dòng)攪拌器旋轉(zhuǎn)。攪拌裝置攪拌器也稱作攪拌槳,是和面機(jī)最重要的部件。按攪拌軸數(shù)目分,有單軸式和雙軸式兩種。臥式的與立式的也有所不同。本次設(shè)計(jì)的絞龍既適用和面粉,也可以用于打年糕,以及打雞蛋等支架機(jī)架采用角鋼,底板用10mm鋼板。尺寸依結(jié)構(gòu)而定,焊接方式連接。小結(jié)本次設(shè)計(jì),針對(duì)家用多功能和面機(jī)的結(jié)構(gòu)特點(diǎn),詳細(xì)設(shè)計(jì)了其攪拌裝置、攪拌傳動(dòng)系統(tǒng)、卸料傳動(dòng)系統(tǒng)。在不斷的發(fā)現(xiàn)問(wèn)題和解決問(wèn)題的過(guò)程中,收獲頗多。在此過(guò)程中,大學(xué)所學(xué)知識(shí)得到了淬煉,繁雜而無(wú)趣的知識(shí)趨于系統(tǒng)。在設(shè)計(jì)思路和方法的掌握上,開始有了個(gè)雛形,知道面對(duì)一個(gè)項(xiàng)目該如何去查閱材料,怎樣去把握結(jié)構(gòu)、確定方案。在日后的工作中,本次設(shè)計(jì)所得經(jīng)驗(yàn)與成果,經(jīng)過(guò)不斷完善后將會(huì)為我人生一個(gè)寶貴的財(cái)富。在思維上,對(duì)于專業(yè)的問(wèn)題思路更加趨于嚴(yán)謹(jǐn)。也能體會(huì)到嚴(yán)謹(jǐn)?shù)乃季S對(duì)于設(shè)計(jì)人員的重要性。可能一個(gè)考慮不周就會(huì)將整個(gè)設(shè)計(jì)推翻,一切都得從頭再來(lái)。謝謝!
JIANGXI AGRICULTURAL UNIVERSITY
本 科 畢 業(yè) 論 文(設(shè) 計(jì))
題目: 家用多功能和面機(jī)設(shè)計(jì)
學(xué) 院: 工 學(xué) 院
姓 名: 盧佳敏
學(xué) 號(hào): 20100998
專 業(yè): 農(nóng)業(yè)機(jī)械化及其自動(dòng)化
年 級(jí): 農(nóng)機(jī)101班
指導(dǎo)教師: 蔡金平 職稱:講 師
二 0 一 四 年 五 月
摘 要
和面機(jī)的實(shí)質(zhì)作用就是進(jìn)行水和面粉的攪拌混合,主要用于年糕、餃子皮、蛋糕、膨化食品、面條、餛飩皮等面食的加工,揉面團(tuán)的好壞直接影響了面食的口感。本設(shè)計(jì)中和面機(jī)采用雙葉片攪拌,和面容量為10kg,適合調(diào)制各種面團(tuán),攪拌裝置選擇了雙葉片攪拌器,由主電機(jī)帶動(dòng)帶輪,通過(guò)齒輪減速傳動(dòng)給攪拌軸。和面完成后打開步進(jìn)電機(jī),方便取出面團(tuán)。本和面機(jī)結(jié)構(gòu)簡(jiǎn)單,用途廣泛,大到農(nóng)村人家打年糕,小到三口之家包餃子。
關(guān)鍵詞:絞龍;多功能;和面機(jī)
Abstract
The essential function of a dough mixer is to stir and blend water and flour. It is mainly used in the processing of pasta foods such as rice cakes, dumpling wrappers, cakes, puffed food, noodles and wonton wrappers, and the quality of the kneaded dough directly affects the taste of these foods. This design covers the working principle of the mixer, the design of its parts and the relevant technical requirements, and works out in detail a 10 kg dough mixer, including the mixing device, the transmission system and the discharging system, together with the assembly drawing, part drawings and component drawings. The machine uses double mixing blades with a dough capacity of 10 kg and is suitable for preparing all kinds of dough. The design process refers to the T-66 horizontal dough mixer. The mixing system uses a double-vane stirrer driven by a three-phase asynchronous motor through a belt drive and a two-stage gear transmission; the bowl-tilting discharging system uses a stepper motor with a worm drive and gear transmission, exploiting the self-locking property of the worm pair. The mixer has a simple structure and a wide range of uses, from making rice cakes in rural households to making dumplings for a family of three.
Key words: auger; multi-function; dough mixer
目 錄
摘 要
Abstract
目 錄
1 緒論
1.1 本課題的研究目的
1.2 我國的和面機市場
1.3 本課題應達到的要求
2 和面機初步設(shè)計
2.1 設(shè)計準備
2.1.1 和面的基本過程
2.1.2 和面機各部件設(shè)計
2.2 家用多功能和面機設(shè)計
2.2.1 確定和面機容量
2.2.2 總方案的設(shè)計
2.2.3 攪拌裝置的設(shè)計
2.2.4 攪拌軸傳動系統(tǒng)的設(shè)計
2.2.5 取料裝置的設(shè)計
2.2.6 支架的設(shè)計
2.2.7 家用和面機裝配圖
3 小結(jié)
參考文獻
致謝
家用多功能和面機(jī)
1 緒論
1.1 本課題的研究目的
和面機(jī)的實(shí)質(zhì)作用就是進(jìn)行水和面粉的攪拌混合,主要用于年糕、餃子皮、蛋糕、膨化食品、面條、餛飩皮等面食的加工,揉面團(tuán)的好壞直接影響了面食的口感。和面機(jī)在實(shí)際使用過(guò)程中,常常根據(jù)需要人為控制,選擇不同的攪拌速度、攪拌裝置形狀,才能讓面團(tuán)的質(zhì)量得到保證。本課題的任務(wù)是設(shè)計(jì)一臺(tái)具有多功能的家用和面機(jī),主要適用于加工小面團(tuán)。
1.2 我國(guó)的和面機(jī)市場(chǎng)
現(xiàn)如今我國(guó)的和面機(jī)種類繁多,有大型食品工廠用的,有小型家庭用的。面食自古就是我們中國(guó)人比較喜愛(ài)的食物。但是揉面不是每個(gè)中國(guó)人都能掌握的一項(xiàng)技術(shù),所以對(duì)于現(xiàn)在的我們吃一口家常面點(diǎn)變的越來(lái)越難了。究其原因,大部分人都是被和面這第一步難倒了。我們的家用多功能和面機(jī)就應(yīng)運(yùn)而生。
1.3 本課題應(yīng)達(dá)到的要求
本課題的任務(wù)是設(shè)計(jì)一臺(tái)具有多功能的小型和面機(jī),為小規(guī)模食品加工帶來(lái)便利和效益。
2 和面機(jī)初步設(shè)計(jì)
2.1 設(shè)計(jì)準(zhǔn)備
2.1.1 和面的基本過(guò)程
和面機(jī)調(diào)制面團(tuán)的基本過(guò)程是我們先添加一定量的面粉(以和面為例),然后加入適量水(根據(jù)我們需要的面團(tuán)用途),啟動(dòng)和面機(jī)。攪拌軸旋轉(zhuǎn)進(jìn)行攪拌混合,然后面粉和水分慢慢的混合,直到兩者混合均勻,面團(tuán)圓滑。步進(jìn)電機(jī)反轉(zhuǎn),翻缸,取出面團(tuán),和面完成。
2.1.2 和面機(jī)各部件設(shè)計(jì)
和面機(jī)功能介紹:功能多樣,用途廣泛,可以用來(lái)和面、打年糕、打雞蛋等。
1、攪拌器
攪拌器也叫攪拌槳,是和面機(jī)最重要的部件。按攪拌軸數(shù)目分,有單軸式和雙軸式兩種。臥式的與立式的也有所不同。本次設(shè)計(jì)的絞龍既適用和面粉,也可以用于打年糕,以及打雞蛋等。
2、攪拌容器
容器多由不銹鋼焊接而成。容器的容積由和面機(jī)的適用的面團(tuán)大小決定。
為了防止軸承處物料或潤滑油泄漏、污染食品,容器與攪拌軸之間必須密封。由于和面機軸的轉(zhuǎn)速普遍較低,工作載荷變化大,密封處間隙變化無常,本設(shè)計的密封裝置采用J型無骨架橡膠密封圈。也可以采用空氣端面密封裝置,密封效果更好。
本設(shè)計(jì)采用機(jī)動(dòng)翻轉(zhuǎn)容器,由電動(dòng)機(jī),減速器及容器翻轉(zhuǎn)齒輪組成。和面完成后,打開步進(jìn)電機(jī),通過(guò)蝸輪蝸桿、減速器帶動(dòng)與容器固接的齒輪轉(zhuǎn)動(dòng),使容器翻轉(zhuǎn)一定的角度,方便使用者取出面團(tuán)。當(dāng)攪拌軸攪拌時(shí)通過(guò)蝸輪蝸桿的自鎖原理將其鎖死。
3、機(jī)架
小型和面機(jī)攪拌軸轉(zhuǎn)速低,工作阻力大,振動(dòng)不會(huì)很劇烈,噪音也不會(huì)很大。本設(shè)計(jì)的機(jī)架結(jié)構(gòu)采用焊接結(jié)構(gòu)。
4、傳動(dòng)裝置
本和面機(jī)的傳動(dòng)是由電動(dòng)機(jī),皮帶輪,減速器及聯(lián)軸器等組成。
和面機(jī)工作轉(zhuǎn)速低,多為25-35r/min,所以應(yīng)選用較大的減速比。本設(shè)計(jì)采用行星輪減速器,其傳動(dòng)效率高,結(jié)構(gòu)緊湊。
表2-1 和面機(jī)容量與所配電機(jī)額定功率的關(guān)系
生產(chǎn)力(kg/次) | 12  | 25  | 50  | 75  | 100
主電機(kW)    | 1.1 | 2.2 | 3.0 | 4.0 | 5.5
2.2 家用多功能和面機(jī)設(shè)計(jì)
2.2.1 確定和面機(jī)容量
一般選5㎏/次、10㎏/次、15㎏/次。
設(shè)計(jì)10kg/次,和面時(shí)間為10min。
2.2.2 總方案的設(shè)計(jì)
1、攪拌容器的總體尺寸
攪拌時(shí),攪拌容器的裝料系數(shù)可取0.5-0.6
查有關(guān)資料可得我國(guó)面粉的吸水系數(shù)為15%
一般面粉的堆積密度約為0.5 g/ml,和面過程中加40%的水(以和面為例),水的密度為1 g/ml。
面粉的質(zhì)量:10 kg;水的質(zhì)量:4 kg
計算得物料總體積為:10000/0.5+4000/1=24000 ml=24000 cm³
考慮裝料系數(shù)并留有充分的攪拌空間余量,取攪拌容器容積為200000 cm³
(1)寬度 B
B=2(R+δ)  (2.1)
式中:R為攪拌槳半徑,其大小取決于面團性質(zhì)及生產(chǎn)量;δ為槳葉與容器的間隙,取δ=5 mm,則B=2(R+5)
(2)高度 H
H=h+B  (2.2)
式中h=(0.5~1)R,取h=0.5R
(3)長度 L
L=(2~2.5)R  (2.3)
攪拌容器的體積(上部矩形截面、底部半圓柱)為 V=L[B(H−R)+πR²/2]  (2.4)
經(jīng)計算可取R=25 cm
取B=50 cm,H=66.5 cm,L=75 cm
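以上尺寸計算可用一段簡單的Python腳本示意復核(函數(shù)名為示意;容積按「上部矩形截面+底部半圓柱」的假定截面形狀估算,非原設(shè)計圖紙公式):

```python
import math

def container_dimensions(R_cm, delta_cm=0.5):
    """按式(2.1)~(2.4)估算攪拌容器尺寸(單位:cm)。
    R為攪拌槳半徑,delta為槳葉與容器壁的間隙(取5 mm)。"""
    B = 2 * (R_cm + delta_cm)        # 寬度,式(2.1)
    h = 0.5 * R_cm                   # 取 h = 0.5R
    H = h + B                        # 高度,式(2.2)
    L = 3 * R_cm                     # 長度,本設(shè)計取75 cm
    # 容積:上部矩形截面加底部半圓柱(假定截面形狀),式(2.4)
    V = L * (B * (H - R_cm) + math.pi * R_cm ** 2 / 2)
    return B, H, L, V

B, H, L, V = container_dimensions(25)
print(B, H, L, round(V))  # 51.0 63.5 75 220894
```

計算結(jié)果與上文取整後的B=50 cm、H=66.5 cm在同一量級,圓整差異屬正常。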
選擇本設(shè)計(jì)的攪拌容器材料為1Ge18Ni9Ti
2、攪拌漿的選擇
本設(shè)計(jì)選擇的攪拌器,攪拌軸上設(shè)置2個(gè)攪拌葉片,攪拌葉片是兩片對(duì)稱固定在攪拌軸上的弧形葉片,葉片與攪拌軸旋轉(zhuǎn)1/4圓連接,主要適用于加工小型面團(tuán)。這種攪拌裝置適合多種作業(yè),比如和面團(tuán),打雞蛋以及做蛋糕等。攪拌均勻,攪拌效率高。
槳葉與容器壁間距為1~2 cm,選擇攪拌槳回轉(zhuǎn)半徑為23 cm。
3、攪拌傳動(dòng)的選擇
選用三相異步電機(jī),通過(guò)帶輪、兩級(jí)齒輪傳動(dòng),帶動(dòng)攪拌裝置旋轉(zhuǎn)。
4、卸料形式
本設(shè)計(jì)和面量較小,因此選擇電動(dòng)翻缸的方式。系統(tǒng)副電動(dòng)機(jī)安裝在機(jī)座上,通過(guò)蝸桿、蝸輪、齒輪傳動(dòng)帶動(dòng)筒體傳動(dòng),和面過(guò)程中利用蝸輪蝸桿自鎖效應(yīng),控制旋轉(zhuǎn)角度。
2.2.3 攪拌裝置設(shè)計(jì)
1、攪拌容器設(shè)計(jì)
容器壁設(shè)計(jì)為3mm,兩側(cè)連接軸承支承處設(shè)置加強(qiáng)圈,厚度為2mm,每處兩個(gè),采用焊接結(jié)構(gòu),選擇本設(shè)計(jì)的攪拌容器材料為1Ge18Ni9Ti。結(jié)構(gòu)如圖2.1所示。
圖2.1攪拌容器
具體尺寸及參數(shù)要求見設(shè)計(jì)圖紙
2、攪拌器設(shè)計(jì)
設(shè)計(jì)攪拌葉片寬度為50mm,厚度為4mm,徑向旋轉(zhuǎn)角度為1/4圓,軸向長(zhǎng)度為450mm。攪拌葉片厚度設(shè)計(jì)為10mm,根部寬度為50mm,頂部寬度為20mm。為輪轂內(nèi)徑85mm,外徑為100mm,寬度為40mm。攪拌葉片、輪轂采用焊接方式連接,材料選用1Ge18Ni9Ti。結(jié)構(gòu)簡(jiǎn)圖如圖2.2。具體參數(shù)見設(shè)計(jì)圖紙。
圖2.2 攪拌器結(jié)構(gòu)簡(jiǎn)圖
3、攪拌裝置支座設(shè)計(jì)
材料取為鑄鐵HT150,基本結(jié)構(gòu)如圖2.3
圖2.3攪拌裝置支座
具體尺寸、參數(shù)見設(shè)計(jì)圖紙
4、其它零件設(shè)計(jì)
材料取為45鋼,攪拌軸左、右支撐,如圖2.4
圖2.4攪拌軸支撐左、右蓋
裝配關(guān)系見裝配圖
5、 攪拌系統(tǒng)部件圖
圖2.14攪拌裝置部件圖
2.2.4 攪拌軸傳動(dòng)系統(tǒng)的設(shè)計(jì)
1、主電機(jī)選擇
表2-2 和面機(jī)容量與所配電機(jī)額定功率的關(guān)系
生產(chǎn)力(kg/次) | 12  | 25  | 50  | 75  | 100
主電機(kW)    | 1.1 | 2.2 | 3.0 | 4.0 | 5.5
根據(jù)表2-2中和面機容量與所配電機額定功率的關(guān)係,選用3 kW的電機;查設(shè)計手冊選用Y132S-6型三相異步電機,額定轉(zhuǎn)速960 r/min。
2、傳動(dòng)系統(tǒng)中傳動(dòng)鏈的設(shè)計(jì)及各傳動(dòng)比的分配設(shè)計(jì)
(1)攪拌槳轉(zhuǎn)速 n=14~35 r/min
各種面團所需的轉(zhuǎn)速不同,現(xiàn)取n≈20 r/min。
(2)電機額定轉(zhuǎn)速 n0=960 r/min(同步轉(zhuǎn)速1000 r/min)
(3)傳動鏈總傳動比 i=n0/n=960/20=48  (2.5)
(4)傳動比分配:
①皮帶傳動:取i1=3
②齒輪傳動:i2=2,i3=8
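上述傳動比分配可用一小段Python驗算(數(shù)值取上文,函數(shù)名為示意):

```python
def total_ratio(n_motor=960, n_paddle=20):
    """式(2.5):總傳動比 i = n0 / n。"""
    return n_motor / n_paddle

i_belt, i_gear1, i_gear2 = 3, 2, 8   # 皮帶與兩級齒輪的分配
assert i_belt * i_gear1 * i_gear2 == total_ratio()
print(total_ratio())  # 48.0
```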
3、皮帶傳動(dòng)設(shè)計(jì)
(1)電機(jī)功率p=3kw, =1.1 ,計(jì)算功率=p=3.3kw (2.6)
(2)選擇v帶型號(hào)
根據(jù)=3.3kw,n1=960r/min,查圖得選用A型v帶
(3)求大、小帶輪基準(zhǔn)直徑、
① 取=100mm
② 驗(yàn)算帶速v==5.02 (2.7)
5<5.02<30,符合要求
③ ==300 (2.8)
由i=3 得n2 =320r/min
圓整后得=315mm
④ 求v帶基準(zhǔn)長(zhǎng)度Ld和中心距a
由0.7(+ )<<2(+ )
取=450mm
帶長(zhǎng)=2+π(+ )/2 + (-)2/(4)=1577.2mm (2.9)
由長(zhǎng)度系列表查得選用帶長(zhǎng)=1600mm
計(jì)算實(shí)際中心距
a=+=4500+(1600-1577.2)/2=461 (2.10)
⑤ 驗(yàn)算小帶輪包角α (2.11)
α =1800-(-)/a×57.30 =153.80>900
合適
⑥ 求V帶根數(shù)z
計(jì)算單根v帶的額定功率
查表8-4a,=0.97kw
查表8-4b,=0.12kw
查表8-5,=0.93 查表8-2,=0.99
=(+)=1.00kw (2.12)
由此可得
Z==3.3 (2.13)
取4根
⑦ 材料的選擇
帶輪材料取為鑄鐵HT150 ,V帶由復(fù)合材料抗拉體,頂膠,底膠和包布做成。
⑧ 結(jié)構(gòu)
小帶輪采用實(shí)心式結(jié)構(gòu),如圖2.5所示。
圖2.5小帶輪
大帶輪采用橢圓輪輻式結(jié)構(gòu),如圖2.6所示。
圖2.6 大帶輪
具體結(jié)構(gòu)參數(shù)、尺寸見設(shè)計(jì)圖紙
4、齒輪傳動(dòng)設(shè)計(jì)
采用兩級(jí)齒輪傳動(dòng)
i2=2,i3=8
(1) 一級(jí)齒輪傳動(dòng)設(shè)計(jì)
P=3 kW,n1=320 r/min,u=2
1) 選精度等級(jí)、齒輪類型、材料、齒數(shù)
①直齒輪傳動(dòng)
②一般工作機(jī)器,速度不高,故選用7級(jí)精度
③材料
小齒輪選用40Cr鋼,硬度為280HBS
大齒輪選用45鋼,硬度為240HBS
④選小齒輪齒數(shù)z1=24,大齒輪齒數(shù)z2=48
2)按齒面接觸疲勞強度設(shè)計,試算小齒輪分度圓直徑:
d1t≥2.32·[KtT1/φd·(u+1)/u·(ZE/[σH])²]^(1/3)  (2.14)
①選載荷系數(shù)Kt=1.3
②小齒輪傳遞轉(zhuǎn)矩
T1=9.55×10⁶P/n1=8.952×10⁴ N·mm  (2.15)
③由表10-7,取齒寬系數(shù)φd=1
④由表10-6,材料彈性影響系數(shù)ZE=189.8 MPa^(1/2)
⑤由圖10-21,按硬度查得[7]
小齒輪接觸疲勞強度極限σHlim1=600 MPa
大齒輪接觸疲勞強度極限σHlim2=550 MPa
⑥應力循環(huán)次數(shù)
N1=1.382×10⁸  (2.16)
N2=N1/u=0.691×10⁸
⑦由圖10-19,接觸疲勞壽命系數(shù)
KHN1=0.9,KHN2=0.95
⑧計算接觸疲勞許用應力
取失效概率為1%,安全系數(shù)S=1
[σH]1=KHN1·σHlim1=0.9×600=540 MPa  (2.17)
[σH]2=KHN2·σHlim2=0.95×550=522.5 MPa
3)計算
①試算小齒輪分度圓直徑d1t≥66.01 mm
②圓周速度v=πd1tn1/(60×1000)=1.105 m/s  (2.18)
③齒寬b=φd·d1t=66.01 mm  (2.19)
④齒寬與齒高之比b/h  (2.20)
模數(shù)mt=d1t/z1=2.75  (2.21)
齒高h=2.25mt=6.19 mm
b/h=66.01/6.19=10.66
⑤載荷系數(shù)
Kv=0.675,KHα=KFα=1,KA=1,KHβ=1.315,KFβ=1.24
K=KA·Kv·KHα·KHβ=0.887  (2.22)
⑥校正分度圓直徑d1=d1t·(K/Kt)^(1/3)=58.113 mm
⑦計算模數(shù)m=d1/z1=2.42
4) 根據(jù)齒根彎曲強(qiáng)度計(jì)算設(shè)計(jì)
確定參數(shù)
①小齒輪彎曲強(qiáng)度極限=500MPa
大齒輪彎曲強(qiáng)度極限=380MPa
②彎曲疲勞壽命系數(shù)=0.85 =0.88
③取彎曲疲勞安全系數(shù)s=1.4
彎曲許用應(yīng)力 (2.23)
④ k==1×0.675×1×1.24=0.837
⑤齒形系數(shù) =2.65 =2.33
應(yīng)力校正系數(shù) =1.58 =1.69
⑥計(jì)算比較
=0.01379 =0.1649
大齒輪值較大
5) 設(shè)計(jì)計(jì)算
=1.625mm (2.24)
圓整后為2.0mm
=58.113mm
==29
=2×29=58
6) 幾何尺寸計(jì)算
分度圓直徑
中心距a=87mm
齒寬 =58mm 取 (2.25)
7) 結(jié)構(gòu)設(shè)計(jì)
小齒輪因直徑較小,齒根圓到鍵槽底部的距離e<2m,故采用齒輪軸形式,如圖2.7所示
圖2.7 一級(jí)齒輪軸
大齒輪采用實(shí)心結(jié)構(gòu),如圖2.8所示
圖2.8一級(jí)大齒輪
具體尺寸、參數(shù)見設(shè)計(jì)圖紙
5、二級(jí)齒輪傳動(dòng)設(shè)計(jì)[6]
使用最大功率3kw計(jì)算
p=3kw =160r/min =8
1) 選精度等級(jí)、齒輪類型、材料、齒數(shù)
①直齒輪傳動(dòng)
②一般工作機(jī)器,速度不高,故選用7級(jí)精度
③材料
小齒輪選用40Cr鋼,硬度為280HBS
大齒輪選用45鋼,硬度為240HBS
④選小齒輪齒數(shù)z1=20,大齒輪齒數(shù)z2=160
2)按齒面接觸疲勞強度設(shè)計
①選載荷系數(shù)Kt=1.3
②小齒輪傳遞轉(zhuǎn)矩
T1=9.55×10⁶P/n1=1.79×10⁵ N·mm
③由表10-7,取齒寬系數(shù)φd=0.5
④由表10-6,材料彈性影響系數(shù)ZE=189.8 MPa^(1/2)
⑤由圖10-21,按硬度查得
小齒輪接觸疲勞強度極限σHlim1=600 MPa
大齒輪接觸疲勞強度極限σHlim2=550 MPa
⑥應力循環(huán)次數(shù)
N1=6.912×10⁷
N2=N1/u=0.864×10⁷
⑦由圖10-19,接觸疲勞壽命系數(shù)
KHN1=0.9,KHN2=0.95
⑧計算接觸疲勞許用應力
取失效概率為1%,安全系數(shù)S=1
[σH]1=0.9×600=540 MPa
[σH]2=0.95×550=522.5 MPa
3)計算
①試算小齒輪分度圓直徑d1t≥75.5 mm
②圓周速度v=πd1tn1/(60×1000)=0.632 m/s
③齒寬b=75.5 mm
④齒寬與齒高之比b/h
模數(shù)mt=d1t/z1=3.775 mm
齒高h=2.25mt=8.494 mm
b/h=8.89
⑤載荷系數(shù)
Kv=1.02,KHα=KFα=1,KA=1,KHβ=1.425,KFβ=1.375
K=KA·Kv·KHα·KHβ=1.4535
⑥校正分度圓直徑d1=d1t·(K/Kt)^(1/3)=79.83 mm
⑦計算模數(shù)m=d1/z1=4 mm
4) 根據(jù)齒根彎曲強(qiáng)度計(jì)算設(shè)計(jì)
確定參數(shù)
①小齒輪彎曲強(qiáng)度極限=500MPa
大齒輪彎曲強(qiáng)度極限=380MPa
②彎曲疲勞壽命系數(shù)=0.85 =0.88
③取彎曲疲勞安全系數(shù)s=1.4
彎曲許用應(yīng)力
④ k==1.4025
⑤齒形系數(shù) =2.8 =2.138
應(yīng)力校正系數(shù) =1.55 =1.835
⑥計(jì)算比較
=0.01430 =0.01642
大齒輪值較大
5) 設(shè)計(jì)計(jì)算
=2.7418mm
圓整后為3.2mm
=79.83mm
==25
=8×25=200
6)幾何尺寸計算
分度圓直徑d1=mz1=80 mm,d2=mz2=640 mm
中心距a=(80+640)/2=360 mm
齒寬b=φd·d1=80 mm,取b2=80 mm,b1=85 mm
7) 結(jié)構(gòu)設(shè)計(jì)[9]
小齒輪采用實(shí)心結(jié)構(gòu),如圖2.8所示
圖2.8二級(jí)小齒輪
大齒輪分度圓直徑在400 mm<d2<1000 mm範圍內(nèi),采用輪輻式結(jié)構(gòu),如圖2.9所示
圖2.9 二級(jí)大齒輪
具體結(jié)構(gòu)參數(shù)、尺寸見設(shè)計(jì)圖紙
6、傳動(dòng)軸設(shè)計(jì)
1) 一級(jí)傳動(dòng)軸
V型皮帶傳動(dòng)的效率為0.96,電機(jī)功率為3kw,轉(zhuǎn)速320r/s,
確定軸的最小直徑
=21mm
取最小處直徑為24mm。
軸的結(jié)構(gòu)設(shè)計(jì),如圖2.10所示
圖2.10 一級(jí)傳動(dòng)軸
未注倒角為C1,未注圓角為R1.5
軸材料選用45鋼
按扭矩校核軸的強(qiáng)度
=31MPa
[]=45MPa, 因此<[],故安全
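軸徑初估與扭轉(zhuǎn)校核的通用算式可寫成如下示意腳本(A0為45鋼的經(jīng)驗系數(shù),此處取103屬假定值):

```python
def shaft_check(P_kw, n_rpm, d_mm, A0=103.0, tau_allow=45.0):
    """軸徑初估 d ≥ A0*(P/n)^(1/3) 與扭轉(zhuǎn)切應力校核(MPa)。"""
    d_min = A0 * (P_kw / n_rpm) ** (1 / 3)           # 最小軸徑,mm
    tau = 9.55e6 * P_kw / (0.2 * d_mm ** 3 * n_rpm)  # 扭轉(zhuǎn)切應力,MPa
    return d_min, tau, tau < tau_allow

# 一級傳動軸:P = 3×0.96 kW(帶傳動效率0.96),n = 320 r/min,d = 24 mm
d_min, tau, ok = shaft_check(P_kw=3 * 0.96, n_rpm=320, d_mm=24)
print(round(d_min, 1), round(tau, 1), ok)  # 21.4 31.1 True
```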
2) 二級(jí)傳動(dòng)軸
V型皮帶傳動(dòng)的效率為0.96,齒輪傳動(dòng)效率為0.97轉(zhuǎn)速160r/s,電機(jī)功率為3kw
確定軸的最小直徑
=26mm
取最小處直徑為30mm。
軸的結(jié)構(gòu)設(shè)計(jì),如圖2.11所示
圖2.11 二級(jí)傳動(dòng)軸
未注倒角為C1.5,未注圓角為R1.5
軸材料選用45鋼
按扭矩校核軸的強(qiáng)度
=30.87MPa
[]=45MPa, 因此<[],故安全
3)攪拌軸
V帶傳動效率為0.96,兩級齒輪傳動效率各為0.97,軸轉(zhuǎn)速為20 r/min,電機功率為3 kW
確定軸的最小直徑:d≥A0·(P/n)^(1/3)≈58.3 mm
取最小處直徑為80 mm。
軸的結(jié)構(gòu)設(shè)計(jì),如圖2.12所示
圖2.12 攪拌軸
未注倒角為C2,未注圓角為R2
軸材料選用1Cr18Ni9Ti
按扭矩校核軸的強(qiáng)度
=9550000×3×0.96×0.97/(160×0.6836×0.2×30×30×30)=23.1MPa
[]=35MPa, 因此<[],故安全
7、軸承座設(shè)計(jì)[6]
材料取為鑄鐵HT150,基本結(jié)構(gòu)如圖2.13所示
圖2.13 軸承座
具體尺寸、參數(shù)見設(shè)計(jì)圖紙
8、攪拌傳動(dòng)部件圖2.14所示
圖2.14 攪拌傳動(dòng)部件圖
2.2.5 取料裝置設(shè)計(jì)
一級(jí)減速采用蝸桿傳動(dòng)
二級(jí)減速采用齒輪傳動(dòng),大齒輪與筒體固定在一起。
電動(dòng)機(jī)轉(zhuǎn)速為n1=800r/min 取容器轉(zhuǎn)速n3≈10r/min
取=32 =2.5
則=800/2.5/32=10r/min
1、電動(dòng)機(jī)的選擇
選擇步進(jìn)電動(dòng)機(jī)
步進(jìn)電機(jī)最大速度一般在600-1000r/min之間。
取BF系列反應(yīng)式步進(jìn)電動(dòng)機(jī)
型號(hào)為110BF02 3相數(shù) 電壓為80V
最大靜轉(zhuǎn)矩為80 kgf·cm=80×9.8×10=7840 N·mm
負載轉(zhuǎn)矩一般為靜轉(zhuǎn)矩的30%~50%,取中間值40%
則負載轉(zhuǎn)矩為7840×0.4=3136 N·mm
2、蝸桿傳動(dòng)設(shè)計(jì)
(1)選擇蝸桿頭數(shù)z1與蝸輪齒數(shù)z2
查表[6],本設(shè)計傳動比i=32較大,取z1=1,z2應在29<z2<82範圍內(nèi)
由i=n1/n2=32=z2/z1得z2=32,合適
(2)求蝸桿、蝸輪直徑d1、d2
取m=2.5,查表得d1=28 mm
蝸桿直徑系數(shù)q=d1/m=11.2,蝸輪分度圓直徑d2=m·z2=80 mm
齒頂圓直徑da1=d1+2m=33 mm,da2=d2+2m=85 mm
蝸桿長度L≥(11+0.06z2)m=57.375 mm,取為60 mm
由表11-4,設(shè)計蝸輪寬度B=25 mm,蝸輪外徑de2=87.5 mm
(3)蝸桿導(dǎo)程角γ
tgγ= z1/q=2/10=0.087
則γ=5.5250 蝸桿與蝸輪的交錯(cuò)角取為900
蝸輪螺旋角β=γ=5.5250
(4)傳動(dòng)的中心距
a=54mm
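蝸桿傳動的主要幾何關(guān)係可用下面的Python示意腳本復核(函數(shù)名為假定):

```python
import math

def worm_geometry(m=2.5, z1=1, z2=32, d1=28):
    """普通圓柱蝸桿傳動主要幾何尺寸(mm)。"""
    q = d1 / m                               # 蝸桿直徑系數(shù)
    d2 = m * z2                              # 蝸輪分度圓直徑
    a = (d1 + d2) / 2                        # 中心距
    gamma = math.degrees(math.atan(z1 / q))  # 蝸桿導程角,度
    return q, d2, a, round(gamma, 2)

print(worm_geometry())  # (11.2, 80.0, 54.0, 5.1)
```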
(5)選擇蝸輪蝸桿材料
蝸桿材料采用40Cr,淬火至45~55HRC并磨削
蝸輪輪芯材料采用45鋼,齒圈用青銅制造,齒圈和輪芯間采用過盈配合
(6)結(jié)構(gòu)設(shè)計(jì)
蝸桿直徑較小,采用蝸桿軸結(jié)構(gòu),如圖2.15所示。
圖2.15 蝸桿軸
蝸輪結(jié)構(gòu)如圖2.16所示
圖2.16 蝸輪
3、齒輪計(jì)算設(shè)計(jì)
聯(lián)軸器的傳動(dòng)效率取為0.99,雙頭蝸桿傳動(dòng)的機(jī)械效率為0.75-0.82 取為0.80,=0.263kw
則齒輪傳遞的功率為:P=0.263kw×0.80×0.99=0.2kw
n=25r/s i=2.5
1) 初步確定齒輪幾何尺寸及參數(shù)
模數(shù)m=4 mm,齒數(shù)z1=40,z2=100
分度圓直徑:d1=160 mm,d2=400 mm
齒寬b1=30 mm,b2=28 mm
2) 材料
大齒輪硬度為240HBS的45鋼。
小齒輪硬度為280HBS的40cr鋼。
3) 結(jié)構(gòu)設(shè)計(jì)
齒輪直徑較大,設(shè)計(jì)成腹板式結(jié)構(gòu),如圖2.17和2.18所示。
圖2.17 小齒輪結(jié)構(gòu)圖
圖2.18 大齒輪結(jié)構(gòu)圖
4) 校核
按齒面接觸強(qiáng)度校核:
①選載荷系數(shù)K=1.3
②小齒輪傳遞轉(zhuǎn)矩T1=5.74×10⁴ N·mm
③取齒寬系數(shù)φd=b/d1=0.1875
④查得材料彈性影響系數(shù)ZE=189.8 MPa^(1/2)
⑤由材料硬度查得
小齒輪接觸疲勞強度極限σHlim1=600 MPa
大齒輪接觸疲勞強度極限σHlim2=550 MPa
⑥應力循環(huán)次數(shù)
N1=0.216×10⁷
N2=0.108×10⁷
⑦由圖10-19,接觸疲勞壽命系數(shù)
KHN1=1.03,KHN2=1.08
⑧計算接觸疲勞許用應力
取失效概率為1%,安全系數(shù)S=1
[σH]1=1.03×600=618 MPa
[σH]2=1.08×550=594 MPa
計(jì)算=49.74mm
所設(shè)計(jì)齒輪=160mm遠(yuǎn)大于,所以合格
4、軸的計(jì)算設(shè)計(jì)
1)蝸桿軸設(shè)計
聯(lián)軸器傳動效率為0.99,電機功率為0.243 kW,轉(zhuǎn)速為800 r/min
確定軸的最小直徑:d≥A0·(P/n)^(1/3)=7.72 mm
取最小處直徑為16 mm。
軸的結(jié)構(gòu)設(shè)計(jì),如圖2.19所示
圖2.19 蝸桿軸
軸材料選用40cr
按扭矩校核軸的強(qiáng)度
=3.79MPa
[]=45MPa, 因此<[],故安全
2) 蝸輪軸計(jì)算設(shè)計(jì)
聯(lián)軸器傳動(dòng)的效率為0.99,蝸桿傳動(dòng)效率為0.80電機(jī)功率為0.243kw,轉(zhuǎn)速 800r/s,i=32
確定軸的最小直徑
=18.7m
取最小處直徑為25mm。
軸的結(jié)構(gòu)設(shè)計(jì),如圖2.20所示
圖2.20 蝸輪軸
軸材料選用45鋼
按扭矩校核軸的強(qiáng)度
=25.46MPa
[]=45MPa, 因此<[],故安全
5、其它零件設(shè)計
1)蝸桿軸承座設(shè)計,材料選用鑄鐵,結(jié)構(gòu)如圖2.21所示。
圖2.21 蝸桿軸承座
具體見設(shè)計(jì)圖紙
2) 蝸輪軸承座設(shè)計(jì)
材料選用鑄鐵,結(jié)構(gòu)如圖2.22
圖2.22 蝸輪軸承座
3) 其它
步進(jìn)電機(jī)座設(shè)計(jì),根據(jù)電機(jī)安裝要求確定機(jī)座高度及各定位尺寸,材料選用鑄鐵。
各標(biāo)準(zhǔn)件如鍵、螺栓等見裝配圖
6、卸料系統(tǒng)部件圖
圖2.23 卸料傳動(dòng)系統(tǒng)部件圖
2.2.6 支架設(shè)計(jì)
機(jī)架采用角鋼,底板用10mm鋼板。
尺寸依結(jié)構(gòu)而定,焊接方式連接。結(jié)構(gòu)簡(jiǎn)圖如圖2.24所示。
圖2.24 支架
2.2.7 絞龍式家用和面機(jī)裝配圖
如圖2.25所示。
圖2.25 和面機(jī)總裝圖
3 小結(jié)
本次畢業(yè)設(shè)計(jì),針對(duì)家用多功能和面機(jī)的設(shè)計(jì)要求,詳細(xì)設(shè)計(jì)了其攪拌裝置、攪拌傳動(dòng)系統(tǒng)、取料傳動(dòng)系統(tǒng)。在整個(gè)設(shè)計(jì)過(guò)程中受益良多,大學(xué)四年的知識(shí)融會(huì)貫通,才有了此次設(shè)計(jì)。也重新深入學(xué)習(xí)了之前掌握的不太好的課程,完善了大學(xué)的知識(shí)結(jié)構(gòu)。各課程之間是互相聯(lián)系,相輔相成的。
以前做事不夠嚴(yán)謹(jǐn),本次設(shè)計(jì)過(guò)程中深切體會(huì)了嚴(yán)謹(jǐn)?shù)闹螌W(xué)態(tài)度對(duì)于設(shè)計(jì)者的重要性。
參考文獻(xiàn)
[1] 盧頌峰. 機械零件課程設(shè)計手冊. 北京:中央廣播電視大學出版社,2001.
[2] 王大康,盧頌峰. 機械設(shè)計課程設(shè)計. 北京:北京工業(yè)大學出版社,2000.
[3] 劉混舉. 機械可靠性設(shè)計. 北京:科學出版社,2012.
[4] 濮良貴,紀名剛. 機械設(shè)計. 北京:高等教育出版社,2012.
[5] 余桂英,郭紀林. AutoCAD 2008基礎(chǔ)教程. 大連:大連理工大學出版社,2008.
[6] 邵立新,夏素民,孫江宏. Pro/ENGINEER Wildfire 3.0中文版標準教程. 北京:清華大學出版社,2007.
致謝
本次畢業(yè)設(shè)計(jì)進(jìn)入尾聲了,由于部分課程基礎(chǔ)不好,本次設(shè)計(jì)我遇到了無(wú)數(shù)的困難,值得慶幸的是都在老師和同學(xué)的幫助下一一解決了。我一直認(rèn)同這句話,別人幫你是你的福分,別人不幫你是別人的本分。所以,謝謝你們,謝謝。
最應(yīng)該感謝的人就是我的指導(dǎo)老師——蔡金平老師,由于基礎(chǔ)差,自然困難也就多。但他始終不厭其煩對(duì)我的疑問(wèn)一一指導(dǎo)??梢哉f(shuō)從開始的確定方案到后面排版,蔡老師真是手把手指點(diǎn)迷津。回憶那段時(shí)光,不經(jīng)有點(diǎn)恍惚,人生有個(gè)良師諍友遠(yuǎn)勝百萬(wàn)雄兵。
還有我要感謝在大學裡教過我的老師們,是他們把知識傾囊相授,更教會了我像個成年人一樣去思考,讓我從大一的懵懂漸漸變得成熟。
雖然本次畢業(yè)設(shè)計(jì)結(jié)束了,我們也要畢業(yè)了,但是人生才剛剛開始。最后祝福老師們,同學(xué)們,朋友們享受人生,享受生活。
Robot companion localization at home and in the office
Arnoud Visser J¨urgen Sturm Frans Groen
Intelligent Autonomous Systems, Universiteit van Amsterdam
http://www.science.uva.nl/research/ias/
Abstract
The abilities of mobile robots depend greatly on the performance of basic skills such as
vision and localization. Although great progress has been made to explore and map extensive
public areas with large holonomic robots on wheels, less attention has been paid to the localization
of a small robot companion in a confined environment such as a room in an office or at home. In
this article, a localization algorithm for the popular Sony entertainment robot Aibo inside a
room is worked out. This algorithm can provide localization information based on the natural
appearance of the walls of the room. The algorithm starts making a scan of the surroundings by
turning the head and the body of the robot on a certain spot. The robot learns the appearance
of the surroundings at that spot by storing color transitions at different angles in a panoramic
index. The stored panoramic appearance is used to determine the orientation (including a
confidence value) relative to the learned spot for other points in the room. When multiple
spots are learned, an absolute position estimate can be made. The applicability of this kind of
localization is demonstrated in two environments: at home and in an office.
1 Introduction
1.1 Context
Humans orientate easily in their natural environments. To be able to interact with humans, mobile
robots also need to know where they are. Robot localization is therefore an important basic skill
of a mobile robot such as a robot companion like the Aibo. Yet, the Sony entertainment software
contained no localization software until the latest release1. Still, many other applications for a
robot companion - like collecting a news paper from the front door - strongly depend on fast,
accurate and robust position estimates. As long as the localization of a walking robot, like the
Aibo, is based on odometry after sparse observations, no robust and accurate position estimates
can be expected.
Most of the localization research with the Aibo has concentrated on the RoboCup. At the
RoboCup2 artificial landmarks as colored flags, goals and field lines can be used to achieve localization
accuracies below six centimeters [6, 8].
The price that these RoboCup approaches pay is their total dependency on artificial landmarks
of known shape, positions and color. Most algorithms even require manual calibration of the actual
colors and lighting conditions used on a field and still are quite susceptible for disturbances around
the field, as for instance produced by brightly colored clothes in the audience.
The interest of the RoboCup community in more general solutions has been (and still is) growing
over the past few years. The almost-SLAM challenge3 of the 4-Legged league is a good example of
the state-of-the-art in this community. For this challenge additional landmarks with bright colors
are placed around the borders on a RoboCup field. The robots get one minute to walk around and
explore the field. Then, the normal beacons and goals are covered up or removed, and the robot
must then move to a series of five points on the field, using the information learnt during the first
1Aibo Mind 3 remembers the direction of its station and toys relative to its current orientation
2RoboCup Four Legged League homepage, last accessed in May 2006, http://www.tzi.de/4legged
3Details about the Simultaneous Localization and Mapping challenge can be found at http://www.tzi.de/
4legged/pub/Website/Downloads/Challenges2005.pdf
1
minute. The winner of this challenge [6] reached the five points by using mainly the information of
the field lines. The additional landmarks were only used to break the symmetry on the soccer field.
A more ambitious challenge is formulated in the newly founded RoboCup @ Home league4. In
this challenge the robot has to safely navigate toward objects in the living room environment. The
robot gets 5 minutes to learn the environment. After the learning phase, the robot has to visit 4
distinct places/objects in the scenario, at least 4 meters away from each other, within 5 minutes.
1.2 Related Work
Many researchers have worked on the SLAM problem in general, for instance on panoramic images
[1, 2, 4, 5]. These approaches are inspiring, but only partially transferable to the 4-Legged league.
The Aibo is not equipped with an omni-directional high-quality camera. The camera in the nose
has only a horizontal opening angle of 56.9 degrees and a resolution of 416 x 320 pixels. Further,
the horizon in the images is not a constant, but depends on the movements of the head and legs of
the walking robot. So each image is taken from a slightly different perspective, and the path of the
camera center is only in first approximation a circle. Further, the images are taken while the head
is moving. When moving at full speed, this can give a difference of 5.4 degrees between the top and
the bottom of the image. So the image seems to be tilted as a function of the turning speed of the
head. Still, the location of the horizon can be calculated by solving the kinematic equations of the
robot. To process the images, a 576 MHz processor is available in the Aibo, which means that only
simple image processing algorithms are applicable. In practice, the image is analyzed by following
scan-lines with a direction relative the calculated horizon. In our approach, multiple sectors above
the horizon are analyzed, with in each sector multiple scan-lines in the vertical direction. One of
the general approaches [3] divides the image into multiple sectors, but that image is omni-directional
and each sector is analyzed on its average color. Our method analyses each sector on
a different characteristic feature: the frequency of color transitions.
2 Approach
The main idea is quite intuitive: we would like the robot to generate and store a 360° circular
panorama image of its environment while it is in the learning phase. After that, it should align
each new image with the stored panorama, and from that the robot should be able to derive its
relative orientation (in the localization phase). This alignment is not trivial because the new image
can be translated, rotated, stretched and perspectively distorted when the robot does not stand at
the point where the panorama was originally learned [11].
Of course, the Aibo is not able (at least not in real-time) to compute this alignment on full-resolution
images. Therefore a reduced feature space is designed so that the computations become
tractable5 on an Aibo. So, a reduced circular 360° panorama model of the environment is learned.
Figure 1 gives a quick overview of the algorithm’s main components.
The Aibo performs a calibration phase before the actual learning can start. In this phase the
Aibo first decides on a suitable camera setting (i.e. camera gain and the shutter setting) based
on the dynamic range of brightness in the autoshutter step. Then it collects color pixels by
turning its head for a while and finally clusters these into 10 most important color classes in the
color clustering step using a standard implementation of the Expectation-Maximization algorithm
assuming a Gaussian mixture model [9]. The result of the calibration phase is an automatically
generated lookup-table that maps every YCbCr color onto one of the 10 color classes and can
therefore be used to segment incoming images into its characteristic color patches (see figure 2(a)).
These initialization steps are worked out in more detail in [10].
4RoboCup @ Home League homepage, last accessed in May 2006, http://www.ai.rug.nl/robocupathome/
5Our algorithm consumes per image frame approximately 16 milliseconds, therefore we can easily process images
at the full Aibo frame rate (30fps).
Figure 1: Architecture of our algorithm
(a) Unsupervised learned color segmentation.
(b) Sectors and frequent color transitions
visualized.
Figure 2: Image processing: from the raw image to sector representation. This conversion consumes
approximately 6 milliseconds/frame on a Sony Aibo ERS7.
2.1 Sector signature correlation
Every incoming image is now divided into its corresponding sectors6. The sectors are located above
the calculated horizon, which is generated by solving the kinematics of the robot. Using the lookup
table from the unsupervised learned color clustering, we can compute the sector features by counting
per sector the transition frequencies between each two color classes in vertical direction. This yields
the histograms of 10x10 transition frequencies per sector, which we subsequently discretize into 5
logarithmically scaled bins. In figure 2(b) we displayed the most frequent color transitions for each
sector. Some sectors have multiple color transitions in the most frequent bin, other sectors have a
single or no dominant color transition. This is only visualization; not only the most frequent color
transitions, but the frequency of all 100 color transitions are used as characteristic feature of the
sector.
In the learning phase we estimate all these 80x(10x10) distributions7 by turning the head and
body of the robot. We define a single distribution for a currently perceived sector by
P_current(i, j, bin) = 1 if discretize(freq(i, j)) = bin, 0 otherwise    (1)
where i, j are indices of the color classes and bin one of the five frequency bins. Each sector is
seen multiple times and the many frequency count samples are combined into a distribution learned
680 sectors corresponding to 360°; with an opening angle of the Aibo camera of approx. 50°, this yields between
10 and 12 sectors per image (depending on the head pan/tilt)
7When we use 16bit integers, a complete panorama model can be described by (80 sectors)x(10 colors x 10
colors)x(5 bins)x(2 byte) = 80 KB of memory.
for that sector by the equation:
P_learned(i, j, bin) = count_sector(i, j, bin) / Σ_{bin' ∈ frequencyBins} count_sector(i, j, bin')    (2)
After the learning phase we can simply multiply the current and the learned distribution to get
the correlation between a currently perceived and a learned sector:
Corr(P_current, P_learned) = Π_{i,j ∈ colorClasses, bin ∈ frequencyBins} P_learned(i, j, bin) · P_current(i, j, bin)    (3)
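As a concrete sketch, the per-sector correlation of equations (1)-(3) can be written in a few lines of NumPy; the array shapes and names below are our own illustration, not the paper's implementation:

```python
import numpy as np

def sector_correlation(counts_learned, freq_current):
    """Correlate one sector (10x10 color transitions, 5 frequency bins)
    of the learned panorama model with the currently perceived sector."""
    # eq. (2): learned distribution = bin counts normalized over the bins
    p_learned = counts_learned / counts_learned.sum(axis=2, keepdims=True)
    # eq. (1): current distribution is 1 in the observed bin, 0 elsewhere
    p_current = np.zeros_like(p_learned)
    i, j = np.indices(freq_current.shape)
    p_current[i, j, freq_current] = 1.0
    # eq. (3): product of p_learned * p_current over all (i, j, bin),
    # taking only the factors where p_current is 1 (the observed bins)
    return float(np.prod(p_learned[p_current == 1.0]))

counts = np.ones((10, 10, 5))            # uniformly learned sector
current = np.zeros((10, 10), dtype=int)  # every transition falls in bin 0
print(sector_correlation(counts, current))  # 0.2 ** 100, a tiny number
```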
2.2 Alignment
After all the correlations between the stored panorama and the new image signatures were evaluated,
we would like to get an alignment between the stored and seen sectors so that the overall likelihood
of the alignment becomes maximal. In other words, we want to find a diagonal path with the
minimal cost through the correlation matrix. This minimal path is indicated as green dots in figure
3. The path is extended to a green line for the sectors that are not visible in the latest perceived
image.
We consider the fitted path to be the true alignment and extract the rotational estimate φ_robot
from the offset from its center pixel to the diagonal (Δsectors):
φ_robot = (360°/80) · Δsectors    (4)
This rotational estimate is the difference between the solid green line and the dashed white line
in figure 3, indicated by the orange halter. Further, we try to estimate the noise by fitting again a
path through the correlation matrix far away from the best-fitted path.
SNR = Σ_{(x,y) ∈ minimumPath} Corr(x, y) / Σ_{(x,y) ∈ noisePath} Corr(x, y)    (5)
The noise path is indicated in figure 3 with red dots.
(a) Robot standing on the trained spot (matching line is just the diagonal)
(b) Robot turned right by 45 degrees (matching line displaced to the left)
Figure 3: Visualization of the alignment step while the robot is scanning with its head. The
green solid line marks the minimum path (assumed true alignment) while the red line marks the
second-minimal path (assumed peak noise). The white dashed line represents the diagonal, while
the orange halter illustrates the distance between the found alignment and the center diagonal
(Δsectors).
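A much simplified version of this alignment step can be sketched as follows: instead of the path search through the correlation matrix, this toy version tries every circular shift of the sectors and applies equation (4) to the best one (names and the use of NumPy are our own assumptions):

```python
import numpy as np

def estimate_rotation(corr, n_sectors=80):
    """Pick the circular shift with the highest summed correlation and
    convert it into a rotational estimate in degrees, as in eq. (4)."""
    totals = [np.trace(np.roll(corr, -s, axis=1)) for s in range(n_sectors)]
    best = int(np.argmax(totals))
    if best > n_sectors // 2:
        best -= n_sectors              # signed offset in sectors
    return best * 360.0 / n_sectors    # degrees

# toy matrix: perceived sectors match the learned ones shifted by 10 sectors
corr = np.roll(np.eye(80), 10, axis=1)
print(estimate_rotation(corr))  # 45.0
```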
2.3 Position Estimation with Panoramic Localization
The algorithm described in the previous section can be used to get a robust bearing estimate
together with a confidence value for a single trained spot. As we finally want to use this algorithm
to obtain full localization we extended the approach to support multiple training spots. The
main idea is that the robot determines to which amount its current position resembles with the
previously learned spots and then uses interpolation to estimate its exact position. As we think
that this approach could also be useful for the RoboCup @ Home league (where robot localization
in complex environments like kitchens and living rooms is required) it could become possible that
we finally want to store a comprehensive panorama model library containing dozens of previously
trained spots (for an overview see [1]).
However, due to the computation time of the feature space conversion and panorama matching,
per frame only a single training spot and its corresponding panorama model can be selected.
Therefore, the robot cycles through the learned training spots one-by-one. Every panorama model
is associated with a gradually changed confidence value representing a sliding average on the confidence
values we get from the per-image matching.
After training, the robot memorizes a given spot by storing the confidence values received from
the training spots. By comparing a new confidence value with its stored reference, it is easy to
deduce whether the robot stands closer or farther from the imprinted target spot.
We assume that the imprinted target spot is located somewhere between the training spots.
Then, to compute the final position estimate, we simply weight each training spot with its normalized
corresponding confidence value:
position_robot = Σ_i position_i · confidence_i / Σ_j confidence_j    (6)
This should yield zero when the robot is assumed to stand at the target spot or a translation
estimate towards the robot’s position when the confidence values are not in balance anymore.
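A minimal sketch of this confidence-weighted interpolation (hypothetical names):

```python
def estimate_position(spots, confidences):
    """Eq. (6): weight each training-spot position with its
    normalized confidence value."""
    total = sum(confidences)
    x = sum(px * c for (px, _), c in zip(spots, confidences)) / total
    y = sum(py * c for (_, py), c in zip(spots, confidences)) / total
    return x, y

# four training spots 1 m from the center along the axes, equal confidences
spots = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(estimate_position(spots, [0.5, 0.5, 0.5, 0.5]))  # (0.0, 0.0)
```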
To prove the validity of this idea, we trained the robot on four spots on a regular 4-Legged field
in our robolab. The spots were located along the axes approximately 1 m away from the center.
As target spot, we simply chose the center of the field. The training itself was performed fully
autonomously by the Aibo and took less than 10 minutes. After training was complete, the Aibo
walked back to the center of the field. We recorded the found position and kidnapped the robot to
an arbitrary position around the field and let it walk back again.
Please be aware that our approach for multi-spot localization is at this moment rather primitive
and should be understood only as a proof-of-concept. In the end, the panoramic localization data
from vision should of course be processed by a more sophisticated localization algorithm, like a
Kalman or particle filter (not least to incorporate movement data from the robot).
3 Results
3.1 Environments
We selected four different environments to test our algorithm under a variety of circumstances. The
first two experiments were conducted at home and in an office environment8 to measure performance
under real-world circumstances. The experiments were performed on a cloudy morning, sunny
afternoon and late in the evening. Furthermore, we conducted exhaustive tests in our laboratory.
Even more challenging, we took an Aibo outdoors (see [7]).
3.2 Measured results
Figure 4(a) illustrates the results of a rotational test in a normal living room. As the error in the
rotation estimates ranges between -4.5 and +4.5 degrees, we may assume an alignment error of
a single sector9; moreover, the size of the confidence interval translates into at most two
sectors, which corresponds to the maximal angular resolution of our approach.
8XX office, DECIS lab, Delft
9full circle of 360° divided by 80 sectors
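The sector arithmetic behind these error bounds is straightforward; a small sketch (the sector count of 80 is taken from the footnote above):

```python
# Angular resolution of the panoramic matching: the full circle is split
# into 80 sectors, so each sector spans 360/80 = 4.5 degrees.
SECTORS = 80
sector_width = 360 / SECTORS   # degrees per sector
print(sector_width)            # 4.5

# A confidence interval of two sectors corresponds to 9 degrees,
# the maximal angular resolution of the approach.
print(2 * sector_width)        # 9.0
```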
(a) Rotational test in natural environment (living
room, sunny afternoon)
(b) Translational test in natural environment (child’s
room, late in the evening)
Figure 4: Typical orientation estimation results of experiments conducted at home. In the rotational
experiment on the left the robot is rotated over 90 degrees on the same spot, and every 5 degrees its
orientation is estimated. The robot is able to find its true orientation with an error estimate equal
to one sector of 4.5 degrees. The translational test on the right is performed in a child’s room. The
robot is translated over a straight line of 1.5 meters, which covers the major part of the free space
in this room. The robot maintains a good estimate of its orientation, although the error
estimate increases as the robot moves away from the location where the appearance of the surroundings was learned.
Figure 4(b) shows the effects of a translational dislocation in a child’s room. The robot was
moved onto a straight line back and forth through the room (via the trained spot somewhere in the
middle). The robot is able to estimate its orientation quite well on this line. The discrepancy with
the true orientation remains between +12.1 and -8.6 degrees, with the largest errors occurring close to the walls. This is also reflected in
the computed confidence interval, which grows steadily when the robot is moved away from the
trained spot. The results are quite impressive for the relatively big movements in a small room and
the resulting significant perspective changes in that room.
Figure 5(a) also stems from a translational test (cloudy morning) which has been conducted in
an office environment. The free space in this office is much larger than at home. The robot was
moved along a 14 m long straight line to the left and right and its orientation was estimated. Note
that the error estimate stays low at the right side of this plot. This is an artifact which nicely reflects
the repetition of similar-looking working islands in the office.
In both translational tests it can be seen that the rotation estimates remain within an
acceptable range. This can also be shown quantitatively (see figure 5(b)): both the orientation
error and the confidence interval increase slowly and in a graceful way when the robot is moved
away from the training spot.
Finally, figure 6 shows the result of the experiment to estimate the absolute position with multiple
learned spots. It can be seen that the localization is not as accurate as traditional approaches,
but can still be useful for some applications (bearing in mind that no artificial landmarks are required).
We repeatedly recorded a deviation to the upper right, which we think can be explained by
the fact that different learning spots do not produce equally strong confidence values; we believe
this can be corrected by normalizing the confidence values in the near future.
4 Conclusion
Although at first sight the algorithm seems to rely on specific texture features of the surrounding
surfaces, in practice no dependency could be found. This can be explained by two reasons: firstly, as
the (vertical) position of a color transition is not used anyway, the algorithm is quite robust against
(vertical) scaling. Secondly, as the algorithm aligns on many color transitions in the background
(typically more than a hundred in the same sector), the few color transitions produced by objects
in the foreground (like beacons and spectators) have a minor impact on the match (because their
sizes relative to the background are comparatively small).
The lack of an accurate absolute position estimate seems to be a clear drawback with respect to
the other methods, but bearing information alone can already be very useful for certain applications.
(a) Translational test i