Paper overview and license

# (Reference translation) Efficient Automated Deep Learning for Time Series Forecasting [full translation available]

Efficient Automated Deep Learning for Time Series Forecasting ( http://arxiv.org/abs/2205.05511v2 )

License: CC BY 4.0
Difan Deng, Florian Karl, Frank Hutter, Bernd Bischl, Marius Lindauer

Recent years have witnessed tremendously improved efficiency of Automated Machine Learning (AutoML), especially Automated Deep Learning (AutoDL) systems, but recent work focuses on tabular, image, or NLP tasks. So far, little attention has been paid to general AutoDL frameworks for time series forecasting, despite the enormous success in applying different novel architectures to such tasks. In this paper, we propose an efficient approach for the joint optimization of neural architecture and hyperparameters of the entire data processing pipeline for time series forecasting. In contrast to common NAS search spaces, we designed a novel neural architecture search space covering various state-of-the-art architectures, allowing for an efficient macro-search over different DL approaches. To efficiently search in such a large configuration space, we use Bayesian optimization with multi-fidelity optimization. We empirically study several different budget types enabling efficient multi-fidelity optimization on different forecasting datasets. Furthermore, we compare our resulting system, dubbed Auto-PyTorch-TS, against several established baselines and show that it significantly outperforms all of them across several datasets.
Published: Fri, 13 May 2022 08:55:32 GMT

Note: The translation results are shown below. The PDF is the original paper. The translation results are licensed under CC BY-SA 4.0; see the top page for details.

Translation results

Efficient Automated Deep Learning for Time Series Forecasting

Difan Deng (1), Florian Karl (2,3), Frank Hutter (4,5), Bernd Bischl (2,3), and Marius Lindauer (1)

(1) Leibniz University Hannover, (2) Ludwig-Maximilian University, Munich, (3) Fraunhofer Institute for Integrated Circuits (IIS), Erlangen, Germany, (4) University of Freiburg, (5) Bosch Center for Artificial Intelligence

Abstract. Recent years have witnessed tremendously improved efficiency of Automated Machine Learning (AutoML), especially Automated Deep Learning (AutoDL) systems, but recent work focuses on tabular, image, or NLP tasks. So far, little attention has been paid to general AutoDL frameworks for time series forecasting, despite the enormous success in applying different novel architectures to such tasks. In this paper, we propose an efficient approach for the joint optimization of neural architecture and hyperparameters of the entire data processing pipeline for time series forecasting. In contrast to common NAS search spaces, we designed a novel neural architecture search space covering various state-of-the-art architectures, allowing for an efficient macro-search over different DL approaches. To efficiently search in such a large configuration space, we use Bayesian optimization with multi-fidelity optimization. We empirically study several different budget types enabling efficient multi-fidelity optimization on different forecasting datasets. Furthermore, we compare our resulting system, dubbed Auto-PyTorch-TS, against several established baselines and show that it significantly outperforms all of them across several datasets.

Keywords: AutoML, Deep Learning, Time Series Forecasting, Neural Architecture Search

1 Introduction

Time series (TS) forecasting plays a key role in many business and industrial problems, because an accurate forecasting model is a crucial part of a data-driven decision-making system. Previous forecasting approaches mainly consider each individual time series as one task and create a local model [3,7,26]. In recent years, with growing dataset size and the ascent of Deep Learning (DL), research interests have shifted to global forecasting models that are able to learn information across all time series in a dataset collected from similar sources [20,41]. Given the strong ability of DL models to learn complex feature representations from a large amount of data, there is a growing trend of applying new DL models to forecasting tasks [38,46,50,56].
Automated machine learning (AutoML) addresses the need of choosing the architecture and its hyperparameters depending on the task at hand to achieve peak predictive performance. The former is formalized as neural architecture search (NAS) [14] and the latter as hyperparameter optimization (HPO) [17]. Several techniques from the fields of NAS and HPO have been successfully applied to tabular and image benchmarks [15,18,33,61]. Recent works have also shown that jointly optimizing both problems provides superior models that better capture the underlying structure of the target task [60,61].

Although the principal idea of applying AutoML to time series forecasting models is very natural, there are only a few prior approaches addressing this [32,37,43,52]. In fact, combining state-of-the-art AutoML methods, such as Bayesian optimization with multi-fidelity optimization [16,30,34,36], with state-of-the-art time series forecasting models leads to several challenges we address in this paper. First, recent approaches for NAS mainly cover cell search spaces, allowing only for a very limited design space that does not support different macro designs [12,59]. Our goal is to search over a large variety of different architectures covering state-of-the-art ideas. Second, evaluating DL models for time series forecasting is fairly expensive, and a machine learning practitioner may not be able to afford many model evaluations. Multi-fidelity optimization, e.g. [36], was proposed to alleviate this problem by only allocating a fraction of the resources to evaluated configurations (low fidelity) and promoting the most promising configurations to give them additional resources (higher fidelity). Third, as a consequence of applying multi-fidelity optimization, we have to choose how different fidelities are defined, i.e. what kind of budget is used and how much is allocated. Examples for such budget types are the number of epochs, dataset size or time series length. Depending on the correlation between the lower and the highest fidelity, multi-fidelity optimization can boost the efficiency of AutoML greatly or even slow it down in the worst case. Since we are the first to consider multi-fidelity optimization for AutoML on time series forecasting, we studied the efficiency of different budget types across many datasets. Fourth, all of these need to be put together; to that effect, we propose a new open-source package for Automated Deep Learning (AutoDL) for time series forecasting, dubbed Auto-PyTorch-TS (the code is available under https://github.com/dengdifan/Auto-PyTorch/tree/forecasting).

Specifically, our contributions are as follows:

1. We propose the AutoDL framework Auto-PyTorch-TS that is able to jointly optimize the architecture and the corresponding hyperparameters for a given dataset for time series forecasting.
2. We present a unified architecture configuration space that contains several state-of-the-art forecasting architectures, allowing for a flexible and powerful macro-search.
3. We provide insights into the configuration space of Auto-PyTorch-TS by studying the most important design decisions and show that different architectures are reasonable for different datasets.
4. We show that Auto-PyTorch-TS is able to outperform a set of well-known traditional local models and modern deep learning models with an average relative error reduction of 19% against the best baseline across many forecasting datasets.

2 Related Work

We start by discussing the most closely related work in DL for time series forecasting, AutoDL, and AutoML for time series forecasting.

2.1 Deep Learning based Forecasting

Early work on forecasting focused on building a local model for each individual series to predict future trends, ignoring the correlation between different series. In contrast, global forecasting models are able to capture information of multiple time series in a dataset and use this at prediction time [31]. With growing dataset size and availability of multiple time series from similar sources, this becomes increasingly appealing over local models. However, empirical experiments show that local statistical models can remain competitive and cannot be simply outperformed by global machine learning and DL models [40].

Simple feed-forward MLPs have been used for time series forecasting and extended to more complex models. For example, the N-BEATS framework [46] is composed of multiple stacks, each consisting of several blocks. This architectural choice aligns with the main principle of modern architecture design: networks should be designed in a block-wise manner instead of layer-wise [62]. Additionally, RNNs [9,23] were proposed to process sequential data, and thus they are directly applicable to time series forecasting [22,56]. A typical RNN-based model is the Seq2Seq network [9] that contains an RNN encoder and decoder. Wen et al. [56] further replaced the Seq2Seq's RNN decoder with a multi-head MLP. Flunkert et al. [50] proposed DeepAR, which wraps an RNN encoder as an auto-regressive model and uses it to iteratively generate new sample points based on sampled trajectories from the last time step. In contrast, CNNs can extract local, spatially-invariant relationships. Similarly, time series data may have time-invariant relationships, which makes CNN-based models suitable for time series tasks, e.g. WaveNet [6,45] and Temporal Convolutional Networks (TCN) [4]. Similar to RNNs, CNNs can also be wrapped by an auto-regressive model to recursively forecast future targets [6,45]. Last but not least, attention mechanisms and transformers have shown superior performance over RNNs on natural language processing tasks [55] and over CNNs on computer vision tasks [13]. Transformers and RNNs can also be combined; e.g. Lim et al. [38] proposed temporal fusion transformers (TFT) that stack a transformer layer on top of an RNN to combine the best of two worlds.
2.2 Automated Deep Learning (AutoDL)

State-of-the-art AutoML approaches include Bayesian Optimization (BO) [18], Evolutionary Algorithms (EA) [44], reinforcement learning [62] or ensembles [15]. Most of them consider the underlying AutoML process as a combined algorithm selection and hyperparameter (CASH) problem [53], i.e., the optimizer selects the most promising algorithms and then optimizes for their optimal hyperparameter configurations. Neural Architecture Search (NAS), on the other hand, only contains one search space: its architecture. NAS aims at finding the optimal architecture for the given task with a fixed set of hyperparameters. Similar to the traditional approach, the architecture can be optimized with BO [33,61], EA [49] or reinforcement learning [62] among others, but there also exist many NAS-specific speedup techniques, such as one-shot models [58] and zero-cost proxies [1]. In this work we follow the state-of-the-art approach from Auto-PyTorch [61] and search for both the optimal architecture and its hyperparameters with BO.

Training a deep neural network requires lots of compute resources. Multi-fidelity optimization [16,30,36] is a common approach to accelerate AutoML and AutoDL. It prevents the optimizer from investing too many resources in poorly performing configurations and allows for spending more on the most promising ones. However, the correlation between different fidelities might be weak [59] for DL models, in which case the result on a lower fidelity provides little information for those on higher fidelities. Thus, it is an open question how to properly select the budget type for a given target task, and researchers often revert to application-specific decisions.

2.3 AutoML for Time Series Forecasting

While automatic forecasting has been of interest in the research community in the past [28], dedicated AutoML approaches for time series forecasting problems have only been explored recently [21,32,35,42,51]. Optimization methods such as genetic algorithms [10], Monte Carlo tree search and algorithms akin to multi-fidelity optimization [51] have been used among others. Paldino et al. [47] showed that AutoML frameworks not originally intended for time series forecasting, in combination with feature engineering, were not able to significantly outperform simple forecasting strategies; a similar approach is presented in [10]. As part of a review of AutoML for forecasting pipelines, Meisenbacher et al. [42] concluded that there is a need for optimizing the entire pipeline, as existing works tend to only focus on certain parts. We took all of these into account by proposing Auto-PyTorch-TS as a framework that is specifically designed to optimize over a flexible and powerful configuration space of forecasting pipelines.

3 Auto-PyTorch Forecasting

For designing an AutoML system, we need to consider the following components: optimization targets, configuration space and optimization algorithm. The high-level workflow of our Auto-PyTorch-TS framework is shown in Figure 1; in many ways it functions similarly to existing state-of-the-art AutoML frameworks [17,61].
Fig. 1: An overview of Auto-PyTorch-TS (workflow: training dataset, task setup, hyperparameter optimizer exchanging configurations and validation losses with the pipeline evaluation, ensemble, and forecasting on the test series). Given a dataset, Auto-PyTorch-TS automatically prepares the data to fit the requirements of a forecasting pipeline. The AutoML optimizer then uses the selected budget type to search for desirable neural architectures and hyperparameters from the pipeline configuration space. Finally, we create an ensemble out of the most promising pipelines to do the final forecasting on the test sets.

To better explain the unique design choices for time series forecasting, we first present a formal statement of the forecasting problem and discuss challenges in evaluating forecasting pipelines before describing the components in detail.

3.1 Problem Definition

A multi-series forecasting task is defined as follows: given a series of sequence data

$$\mathcal{D} = \left\{ y_{i,1:T_i},\; x^{(p)}_{i,1:T_i},\; x^{(f)}_{i,T_i+1:T_i+H} \right\}_{i=1}^{N},$$

where $T_i$ is the length of each sequence until forecasting starts, $H$ is the forecasting horizon that the model is required to predict, $N$ is the number of sequences in the dataset, $y_{i,1:T_i}$ and $x^{(p)}_{i,1:T_i}$ are the sets of observed past targets and features, and $x^{(f)}_{i,T_i+1:T_i+H}$ is the set of known future features. The task of time series forecasting is to predict the possible future values with a model trained on $\mathcal{D}$:

$$\hat{y}_{i,T_i+1:T_i+H} = f\left(y_{i,1:T_i}, x_{i,1:T_i+H}; \theta\right), \qquad (1)$$

where $x_{i,1:T_i+H} := [x^{(p)}_{i,1:T_i}, x^{(f)}_{i,T_i+1:T_i+H}]$, $\theta$ are the model parameters that are optimized with the training loss $\mathcal{L}_{\text{train}}$, and $\hat{y}_{i,T_i+1:T_i+H}$ are the predicted future target values. Depending on the model type, $\hat{y}_{i,T_i+1:T_i+H}$ can be distributions [50] or scalar values [46]. Finally, the forecasting quality is measured by the discrepancy between the predicted targets $\hat{y}_{i,T_i+1:T_i+H}$ and the ground-truth future targets $y_{i,T_i+1:T_i+H}$ according to a defined loss function $\mathcal{L}$ (for the sake of brevity, we omit the sequence index $i$ in the remainder of this paper unless stated otherwise). The most commonly applied metrics include mean absolute scaled error (MASE), mean absolute percentage error (MAPE), symmetric mean absolute percentage error (sMAPE) and mean absolute error (MAE) [19,29,46].
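Since MASE is the metric used for both validation and test evaluation in Section 4, a minimal sketch of its computation may help; the function below is an illustrative assumption (its name, signature, and the seasonal-naive scaling with period m are ours, not taken from the Auto-PyTorch-TS code base):

```python
import numpy as np

def mase(y_true: np.ndarray, y_pred: np.ndarray,
         y_train: np.ndarray, m: int = 1) -> float:
    """Mean Absolute Scaled Error for one series (illustrative sketch).

    y_true:  ground-truth future targets y_{T+1:T+H}
    y_pred:  forecasts yhat_{T+1:T+H}
    y_train: observed past targets y_{1:T}, used only for the scaling term
    m:       seasonal period (m=1 yields the naive one-step scaling)
    """
    # Scale: mean absolute error of the (seasonal) naive forecast on the training series.
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))) / scale)
```

The dataset-level score reported later (Table 2) is then the mean of this quantity across all series in the dataset.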
| Encoder | Decoder | auto-regressive | Architecture Class |
|---|---|---|---|
| Flat Encoder: MLP | MLP | No | Feed-Forward Network |
| Flat Encoder: N-BEATS | N-BEATS | No | N-BEATS [46] |
| Seq. Encoder: RNN/Transformer | RNN/Transformer | Yes | Seq2Seq [9] |
| Seq. Encoder: RNN/Transformer | RNN/Transformer | No | TFT [38] |
| Seq. Encoder: RNN/Transformer | MLP | Yes | DeepAR [50] |
| Seq. Encoder: RNN/Transformer | MLP | No | MQ-RNN [56] |
| Seq. Encoder: TCN | MLP | Yes | DeepAR [50] / WaveNet [45] |
| Seq. Encoder: TCN | MLP | No | MQ-CNN [56] |

Table 1: An overview of the possible combinations and design decisions of the models that exist in our configuration space. Only the TFT network contains the optional components presented in Figure 2a.

...stacked blocks [62] that can be disentangled to fit different requirements [57]. For instance, Seq2Seq [9], MQ-RNN [56] and DeepAR [50] all contain an RNN network as their encoders. These models naturally share common aspects and cannot simply be treated as completely different models. To fully utilize the relationships between different models, we propose a configuration space that includes all the possible components in a forecasting network. As shown in Figure 2a, most existing forecasting architectures can be decomposed into 3 parts: encoder, decoder and forecasting head. The encoder receives the past target values and outputs an embedding in the latent space. The latent embedding, together with the known future features (if applicable), is fed to the decoder network; the output of the decoder network is finally passed to the forecasting head to generate a sequence of scalar values or distributions, depending on the type of forecasting head. Additionally, the variable selection, temporal fusion and skip connection layers introduced by TFT [38] can be seamlessly integrated into our networks and are treated as optional components. Table 1 lists all possible choices of encoders, decoders, and their corresponding architectures in our configuration space. Specifically, we define two types of network components: sequential encoder (Seq. Encoder) and flat encoder (Flat Encoder). The former (e.g., RNN, Transformer and TCN) directly processes sequential data and outputs a new sequence; the latter (e.g., MLP and N-BEATS) needs to flatten the sequential data into a 2D matrix to fuse the information from different time steps. Through this configuration space, Auto-PyTorch-TS is able to encompass the "convex hull" of several state-of-the-art global forecasting models and tune them.

As shown in Figure 2, given the properties of encoders, decoders, and models themselves, we construct three types of architectures that forecast the future targets in different ways. Non-auto-regressive models (Figure 2b), including MLP, MQ-RNN, MQ-CNN, N-BEATS and TFT, forecast the multi-horizontal predictions within one single step. In contrast, auto-regressive models do only one-step forecasting within each forward pass; the generated forecasting values are then iteratively fed to the network to forecast the value at the next time step. All the auto-regressive models are trained with teacher forcing [22]. Only sequential networks can serve as an encoder in auto-regressive models; however, we can select both sequential and flat decoders for auto-regressive models. Sequential decoders are capable of independently receiving the newly generated predictions. We consider this class of architectures as a Seq2Seq [9] model: we first feed the past input values to the encoder to generate its output h_x and then pass h_x to the decoder, as shown in Figure 2c. Having acquired h_x, the decoder then generates a sequence of predictions with the generated predictions and known future values by itself. Finally, auto-regressive models with flat decoders are classified as the family of DeepAR models [50].
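To make this decomposition concrete, here is a minimal univariate sketch of the encoder-decoder-head pattern with teacher-forced training and Seq2Seq-style auto-regressive decoding; class and parameter names (Seq2SeqForecaster, d_model) are illustrative assumptions, not the actual Auto-PyTorch-TS classes:

```python
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    """Illustrative encoder-decoder-head skeleton (Figure 2a style)."""
    def __init__(self, d_model: int = 32):
        super().__init__()
        self.encoder = nn.GRU(1, d_model, batch_first=True)  # stands in for any Seq. encoder
        self.decoder = nn.GRU(1, d_model, batch_first=True)  # sequential decoder -> Seq2Seq family
        self.head = nn.Linear(d_model, 1)                    # scalar forecasts; a distribution head is also possible

    def forward(self, past: torch.Tensor, future_inputs: torch.Tensor) -> torch.Tensor:
        # past: [B, T, 1] observed targets; future_inputs: [B, H, 1] teacher-forced targets at train time
        _, h = self.encoder(past)                # latent embedding h_x of the history
        out, _ = self.decoder(future_inputs, h)  # decoder conditioned on h_x
        return self.head(out)                    # [B, H, 1] point forecasts

    @torch.no_grad()
    def generate(self, past: torch.Tensor, horizon: int) -> torch.Tensor:
        # Auto-regressive decoding: each one-step forecast is fed back as the next input.
        _, h = self.encoder(past)
        x, preds = past[:, -1:, :], []
        for _ in range(horizon):
            out, h = self.decoder(x, h)
            x = self.head(out)                   # [B, 1, 1] one-step forecast
            preds.append(x)
        return torch.cat(preds, dim=1)           # [B, H, 1]
```

Swapping the GRU for a Transformer or TCN encoder, or replacing the linear head with a distribution head, stays within the same skeleton, which is exactly what makes a joint configuration space over these choices possible.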
For the DeepAR family, since the decoder cannot collect more information as the number of generated samples increases, we need to feed the generated samples back to the encoder, as shown in Figure 2d.

Besides its architecture, hyperparameters also play an important role in the performance of a deep neural network; for the details of the other hyperparameters in our configuration space, we refer to the appendix.

3.4 Hyperparameter Optimization

We optimize the loss on the validation set L_Dval with BO [17,25]. BO is known for its sample efficiency, which makes it an ideal candidate for expensive black-box function optimization tasks, such as AutoDL for expensive global forecasting DL models. Specifically, we optimize the hyperparameters with SMAC [25] (we used SMAC3 [39] from https://github.com/automl/SMAC3), which constructs a random forest (RF) to model the loss distribution over the configuration space. Similar to other AutoML tools [18,61] for supervised classification, we utilize multi-fidelity optimization to achieve better any-time performance. Multi-fidelity optimizers start with the lowest budget and gradually assign higher budgets to well-performing configurations. Thereby, the choice of what budget type to use is essential for the efficiency of a multi-fidelity optimizer. The most popular choices of budget type in DL tasks are the number of epochs and dataset size. For time series forecasting, we propose the following four different types of budget:

- Number of Epochs (#Epochs)
- Series Resolution (Resolution)
- Number of Series (#Series)
- Number of Samples in each Series (#SMPs per Ser.)

The Resolution budget controls the sample interval: the interval is computed as the inverse of the fidelity value, e.g., a resolution fidelity of 0.1 indicates that for each series we take every tenth point. We shrink the size of the sliding window accordingly to ensure that the lower-fidelity optimizer does not receive more information than the higher-fidelity optimizer. #Series means that we only sample a fraction of sequences to train our model. Finally, #SMPs per Ser. indicates that we decrease the expected value of the number of samples within each sequence; see Section 3.2 for the sample-generation method. Next to these multi-fidelity variants, we also consider vanilla Bayesian optimization (Vanilla BO) using the maximum of all these fidelities.
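To make the Resolution budget concrete, the sketch below subsamples one series and shrinks the sliding window by the same factor; apply_resolution_budget is a hypothetical helper for illustration, not part of the package API:

```python
import numpy as np

def apply_resolution_budget(series: np.ndarray, fidelity: float,
                            window_size: int) -> tuple[np.ndarray, int]:
    """Subsample a series for a resolution fidelity in (0, 1].

    The sample interval is the inverse of the fidelity value, e.g. a
    fidelity of 0.1 keeps every tenth point; the sliding window shrinks
    by the same factor so a lower fidelity never sees more history.
    """
    interval = int(round(1.0 / fidelity))
    return series[::interval], max(1, window_size // interval)

coarse, window = apply_resolution_budget(np.arange(100.0), fidelity=0.1, window_size=50)
# coarse has 10 points; the window shrinks from 50 to 5 steps
```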
3.5 Proxy-Evaluation on Many Time Series

All trained models must query every series to evaluate L_val. However, the number of series can be quite large. Additionally, many forecasting models (e.g., DeepAR) are cheap to train but expensive during inference. As a result, rather than training time, inference time is more likely to become a bottleneck when optimizing the hyperparameters on a large dataset (for instance, with 10k series or more), where configurations with lower fidelities would no longer provide the desirable speed-up when using the full validation set. Thereby, we consider a different evaluation strategy on large datasets (with more than 1k series) and lower budgets: we ask the model to only evaluate a fraction of the validation set (we call this fraction the "proxy validation set"), while the other series are predicted by a dummy forecaster, which simply repeats the last target value in the training series, i.e., y_T, H times.
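The two ingredients above can be sketched as follows; both helpers are illustrative assumptions rather than the package's API (the dummy forecaster repeats the last training value, and the proxy set grows linearly with the budget, with a floor of 1k series):

```python
import numpy as np

def dummy_forecast(y_train: np.ndarray, horizon: int) -> np.ndarray:
    # Repeat the last observed target value H times (y_T, H times).
    return np.repeat(y_train[-1], horizon)

def proxy_set_size(n_series: int, budget: float, min_series: int = 1000) -> int:
    # Proxy validation set size is proportional to the allocated budget,
    # with a floor of 1k series; the maximal budget (1.0) evaluates all series.
    return min(n_series, max(min_series, int(np.ceil(budget * n_series))))

dummy_forecast(np.array([1.0, 2.0, 3.0]), horizon=4)  # -> array([3., 3., 3., 3.])
proxy_set_size(145063, budget=1/9)                    # -> 16119
```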
The size of the proxy validation set is proportional to the budget allocated to the configuration: the maximal budget indicates that the model needs to evaluate the entire validation set. We set the minimal number of series in the proxy set to 1k to ensure that it contains enough information from the validation set. The proxy validation set is generated with a grid to ensure that all configurations under the same fidelity are evaluated on the same proxy set.

4 Experiments

We evaluate Auto-PyTorch-TS on the established benchmarks of the Monash Time Series Forecasting Repository [20] (https://forecastingdata.org/). This repository contains various datasets that come from different domains, which allows us to assess the robustness of our framework against different data distributions. Additionally, it records the performance of several models, including local models [3,7,11,26,27], global traditional machine learning models [48,54], and global DL models [2,6,46,50,55] on D_test; see [20] for details. For evaluating Auto-PyTorch-TS, we follow the exact same protocol and dataset splits. We focus our comparison of Auto-PyTorch-TS against two types of baselines: (i) the overall single best baseline from [20], assuming a user would have the required expert knowledge, and (ii) the best dataset-specific baseline. We note that the latter is a very strong baseline, and a priori it is not known which baseline would be best for a given dataset; thus we call it the theoretical oracle baseline. Since the Monash Time Series Forecasting Repository does not record the standard deviation of each method, we reran those baselines on our cluster 5 times. Compared to the repository, our configuration space includes one more strong class of algorithms, TFT [38], which we added to our set of baselines to ensure a fair and even harder comparison.

We set up our task following the method described in Section 3.2: HPO is only executed on D_train/val, while H is given by the original repository. As described in Section 3.2, we create an ensemble of size 20 that collects multiple models during the course of optimization. When the search finishes, we refit the ensemble to the union of D_train/val and evaluate the refitted model on D_test. Both L_val and L_test are measured with the mean value of MASE [29] across all the series in the dataset. To leverage available expert knowledge, Auto-PyTorch-TS runs an initial design with the default configurations of each model in Table 1. Please note that this initial design is evaluated on the smallest available fidelity. All multi-fidelity variants of Auto-PyTorch-TS start with the cheapest fidelity of 1/9, then use 1/3, and end with the highest fidelity (1.0).

The runs of Auto-PyTorch-TS are repeated 5 times with different random seeds. We ran all the datasets on a cluster node equipped with 8 Intel Xeon Gold 6254 @3.10GHz CPU cores and one NVIDIA GTX 2080TI GPU, with PyTorch 1.10 and CUDA 11.6. The hyperparameters were optimized with SMAC3 v1.0.1 for 10 hours, and then we refit the ensemble on D_train/val and evaluate it on the test set. All the jobs were finished within 12 hours.
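The text does not spell out the selection procedure for the size-20 ensemble here, but Auto-PyTorch has used greedy ensemble selection in the style of Caruana et al.; a sketch under that assumption (with plain MAE as the validation loss) might look like this:

```python
import numpy as np

def greedy_ensemble_selection(val_preds: list[np.ndarray],
                              y_val: np.ndarray, size: int = 20) -> list[int]:
    """Greedy ensemble selection with replacement (illustrative sketch):
    repeatedly add the model whose inclusion minimizes the ensemble's
    validation error; repeated indices act as larger weights."""
    chosen: list[int] = []
    ens_sum = np.zeros_like(y_val, dtype=float)
    for _ in range(size):
        errors = [np.mean(np.abs((ens_sum + p) / (len(chosen) + 1) - y_val))
                  for p in val_preds]
        best = int(np.argmin(errors))
        chosen.append(best)
        ens_sum = ens_sum + val_preds[best]
    return chosen
```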
4.1 Time Series Forecasting

| Dataset | #Epochs | Resolution | #Series | #SMPs per Ser. | Vanilla BO | Best dataset-specific baseline | Overall single best baseline |
|---|---|---|---|---|---|---|---|
| M3 Yearly | 2.73 (0.10) | 2.66 (0.05) | 2.76 (0.09) | **2.64 (0.09)** | 2.68 (0.08) | 2.77 (0.00) | 3.13 (0.00) |
| M3 Quarterly | **1.08 (0.01)** | 1.10 (0.01) | 1.10 (0.01) | 1.09 (0.02) | 1.12 (0.03) | 1.12 (0.00) | 1.26 (0.00) |
| M3 Monthly | **0.85 (0.01)** | 0.89 (0.02) | 0.86 (0.01) | 0.87 (0.04) | 0.86 (0.02) | 0.86 (0.00) | 0.86 (0.00) |
| M3 Other | 1.90 (0.07) | 1.82 (0.03) | 1.98 (0.13) | 1.92 (0.05) | 1.95 (0.15) | **1.81 (0.00)** | 1.85 (0.00) |
| M4 Quarterly | 1.15 (0.01) | **1.13 (0.01)** | **1.13 (0.01)** | 1.15 (0.01) | 1.15 (0.02) | 1.16 (0.00) | 1.19 (0.00) |
| M4 Monthly | **0.93 (0.02)** | **0.93 (0.02)** | **0.93 (0.02)** | **0.93 (0.02)** | 0.96 (0.02) | 0.95 (0.00) | 1.05 (0.00) |
| M4 Weekly | 0.44 (0.01) | 0.45 (0.02) | **0.43 (0.02)** | 0.44 (0.02) | 0.45 (0.01) | 0.48 (0.00) | 0.50 (0.00) |
| M4 Daily | 1.14 (0.01) | 1.18 (0.07) | 1.16 (0.06) | 1.14 (0.04) | 1.38 (0.41) | **1.13 (0.02)** | 1.16 (0.00) |
| M4 Hourly | 0.86 (0.12) | 0.95 (0.11) | **0.78 (0.07)** | 0.85 (0.07) | 0.85 (0.06) | 1.66 (0.00) | 2.66 (0.00) |
| M4 Yearly | **3.05 (0.03)** | 3.08 (0.04) | **3.05 (0.01)** | 3.09 (0.04) | 3.10 (0.02) | 3.38 (0.00) | 3.44 (0.00) |
| Tourism Quarterly | 1.61 (0.03) | 1.57 (0.05) | 1.59 (0.05) | 1.59 (0.02) | 1.55 (0.03) | **1.50 (0.01)** | 1.83 (0.00) |
| Tourism Monthly | **1.42 (0.03)** | 1.44 (0.03) | 1.45 (0.04) | 1.47 (0.02) | **1.42 (0.02)** | 1.44 (0.02) | 1.75 (0.00) |
| Dominick | 0.51 (0.04) | **0.49 (0.00)** | **0.49 (0.01)** | **0.49 (0.01)** | **0.49 (0.01)** | 0.51 (0.00) | 0.72 (0.00) |
| KddCup | 1.20 (0.02) | 1.18 (0.02) | 1.18 (0.03) | 1.18 (0.03) | 1.20 (0.03) | **1.17 (0.01)** | 1.39 (0.00) |
| Weather | 0.63 (0.08) | 0.58 (0.04) | 0.59 (0.02) | 0.59 (0.06) | **0.57 (0.00)** | 0.64 (0.01) | 0.69 (0.00) |
| NN5 Daily | 0.79 (0.01) | 0.80 (0.01) | 0.81 (0.04) | **0.78 (0.01)** | 0.79 (0.01) | 0.86 (0.00) | 0.86 (0.00) |
| NN5 Weekly | **0.76 (0.01)** | **0.76 (0.03)** | **0.76 (0.01)** | 0.77 (0.01) | **0.76 (0.01)** | 0.77 (0.01) | 0.87 (0.00) |
| Hospital | 0.76 (0.01) | 0.76 (0.00) | 0.76 (0.00) | **0.75 (0.01)** | **0.75 (0.01)** | 0.76 (0.00) | 0.77 (0.00) |
| Traffic Weekly | 1.04 (0.07) | 1.10 (0.03) | 1.04 (0.05) | 1.08 (0.09) | 1.03 (0.07) | **0.99 (0.03)** | 1.15 (0.00) |
| Electricity Weekly | 0.78 (0.04) | 1.06 (0.13) | 0.80 (0.04) | **0.74 (0.07)** | 0.85 (0.11) | 0.76 (0.01) | 0.79 (0.00) |
| Electricity Hourly | 1.52 (0.05) | 1.54 (0.00) | 1.58 (0.08) | 1.54 (0.06) | **1.51 (0.05)** | 1.60 (0.02) | 3.69 (0.00) |
| Kaggle Web Traffic Weekly | 0.56 (0.01) | 0.56 (0.01) | **0.55 (0.00)** | 0.57 (0.01) | 0.59 (0.01) | 0.61 (0.02) | 0.62 (0.00) |
| Covid Deaths | 5.11 (1.60) | 4.54 (0.05) | **4.43 (0.13)** | 4.58 (0.30) | 4.53 (0.24) | 5.16 (0.04) | 5.72 (0.00) |
| Temperature Rain | 0.76 (0.05) | 0.75 (0.01) | 0.73 (0.02) | 0.73 (0.03) | **0.71 (0.04)** | **0.71 (0.03)** | 1.23 (0.00) |
| ∅ Rel. Impr. Best | 0.82 | 0.83 | 0.81 | 0.81 | 0.82 | 0.86 | 1.0 |
| ∅ Rel. Impr. Oracle | 0.96 | 0.97 | 0.95 | 0.95 | 0.96 | 1.0 | 1.17 |

Table 2: We compare variants of Auto-PyTorch-TS against the single best baseline (TBATS) and a theoretically optimal oracle of choosing the correct baseline for each dataset w.r.t. mean MASE errors on the test sets. We show the mean and standard deviation for each dataset. The best results are highlighted in bold-face. We computed the relative improvement w.r.t. the oracle baseline on each dataset and used the geometric average for aggregation over the datasets.
Table 2 shows how different variants of Auto-PyTorch-TS perform against the two types of baselines across multiple datasets. Even using the theoretical oracle baseline for comparison, Auto-PyTorch-TS is able to outperform it on 18 out of 24 datasets; on the other 6 datasets, it achieved nearly the same performance as the baselines. On average, we were able to reduce the MASE by up to 5% against the oracle and by up to 19% against the single best baseline, establishing a new robust state of the art overall. Surprisingly, the forecasting-specific budget types did not perform significantly better than the number of epochs (the common budget type in classification). Nevertheless, the optimal choice of budget type varies across datasets, which aligns with our intuition that on a given dataset the correlation between lower and higher fidelities may be stronger for certain budget types than for other types. If we were to construct a theoretically optimal budget-type selector, which utilizes the best-performing budget type for a given dataset, we would reduce the relative error by 2% over the single best (i.e., #SMPs per Ser.).

4.2 Hyperparameter Importance

Although HPO is often considered a black-box optimization problem [17], it is important to shed light on the importance of different hyperparameters to provide insights into the design choices of DL models and to indicate how to design the next generation of AutoDL systems. Here we evaluate the importance of the hyperparameters with a global analysis based on fANOVA [24], which measures the importance of a hyperparameter by the variance caused by changing that single hyperparameter while marginalizing over the effect of all other hyperparameters. Results on individual datasets can be found in the appendix.

Fig. 3: Hyperparameter importance with fANOVA across all datasets of Table 2.

For each of the 10 most important hyperparameters in our configuration space (of more than 200 dimensions), Figure 3 shows a boxplot of the importance across our datasets. The most important hyperparameters are closely associated with the training procedure: 3 of them control the optimizer of the neural network and its learning rate. Additionally, 4 hyperparameters (window size, number of batches per epoch, batch size, target scaler) contribute to the sampler and data preprocessing, showing the importance of the data fed to the network. Finally, the fact that two hyperparameters controlling the data distribution are amongst the most important ones indicates that identifying the correct potential data distribution might be beneficial to the performance of the model.

4.3 Ablation Study

In Section 3.5, we proposed to partially evaluate the validation set on larger datasets to further accelerate the optimization process. To study the efficiency gain of this approach, we compare evaluation on the full validation set vs. the proxy-evaluation on parts of the validation set. We ran this ablation study on the largest datasets, namely "Kaggle Web Traffic Weekly" (145063 series), "M4 Monthly" (48000 series) and "Dominick" (115704 series).

Fig. 4: Validation losses over time with different multi-fidelity approaches. We compute the area under the curve (AUC) of our approach (PE) and the naive multi-fidelity optimizer (FE) and list them in the figures.

Figure 4 shows the results. It takes much less time for our optimizer (blue) to finish the first configuration evaluations on the lowest fidelity, improving efficiency early on. In contrast, a vanilla multi-fidelity optimizer (orange) with the full validation set takes nearly the same amount of time as vanilla BO (green) to finish the first evaluation, showing the need for efficient validation and not only efficient training. We note that the final performance does not change substantially between the different methods. Overall, Auto-PyTorch-TS achieves the best any-time performance. We note that Auto-PyTorch-TS has not converged after 10h and will most likely achieve even better performance if provided with more compute resources. The results on the other datasets show a similar trend and can be found in the appendix.
5 Conclusion and Future Work

In this work, we introduced Auto-PyTorch-TS, an AutoDL framework for the joint optimization of architecture and hyperparameters of DL models for time series forecasting tasks. To this end, we proposed a new flexible configuration space encompassing several state-of-the-art forecasting DL models by identifying key concepts in different model classes and combining them into a single framework. Given the flexibility of our configuration space, new developers could easily adapt their architectures to our framework under the assumption that they can be formulated as an encoder-decoder-head architecture. Despite recent advances and competitive results, DL methods have until now not been considered the undisputed best approach in time series forecasting tasks: traditional machine learning approaches and statistical methods have remained quite competitive [20,40].
By conducting a large benchmark, we demonstrated that our proposed Auto-PyTorch-TS framework is able to outperform current state-of-the-art methods on a variety of forecasting datasets from different domains and even improves over a theoretically optimal oracle comprised of the best possible baseline model for each dataset. While we were able to show superior performance over existing methods, our results suggest that a combination of DL approaches with traditional machine learning and statistical methods could further improve performance. The optimal setup for such a framework and how to best utilize these model classes side by side poses an interesting direction for further research.

Our framework makes use of BO and utilizes multi-fidelity optimization in order to introduce a cost-aware component and alleviate the costs incurred by the expensive training of DL models. Our experiments empirically demonstrate that the choice of budget type can have an influence on the quality of the optimization and ultimately on performance. To the best of our knowledge, there is currently no research concerning the choice of fidelity when utilizing multi-fidelity optimization for architecture search and HPO of DL models; not only for time series forecasting, but for other tasks as well. This provides a great opportunity for future research and could further improve current state-of-the-art methods already utilizing multi-fidelity optimization. Additionally, we used our extensive experiments to examine the importance of hyperparameters in our configuration space and were able to identify some of the critical choices for the configuration of DL architectures for time series forecasting.

Finally, in contrast to previous AutoML systems, to the best of our knowledge, time series forecasting is the first task where not only efficient training is important but also efficient validation. Although we showed empirical evidence for the problem and took a first step in the direction of efficient validation, it remains an open challenge for future work. Auto-PyTorch-TS can automatically optimize the hyperparameter configuration for a given task and can be viewed as a benchmark tool that isolates the influence of hyperparameter configurations of the model. This makes our framework an asset to the research community as it enables researchers to conveniently compare their methods to existing DL models.
Acknowledgements

Difan Deng and Marius Lindauer acknowledge financial support by the Federal Ministry for Economic Affairs and Energy of Germany in the project CoyPu under Grant No. 01MK21007L. Bernd Bischl acknowledges funding by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A. Florian Karl acknowledges support by the Bavarian Ministry of Economic Affairs, Regional Development and Energy through the Center for Analytics–Data–Applications (ADA Center) within the framework of BAYERN DIGITAL II (20-3410-2-9-8).
Frank Hutter acknowledges support by the European Research Council (ERC) Consolidator Grant "Deep Learning 2.0" (grant no. 101045765).
References

1. Abdelfattah, M.S., Mehrotra, A., Dudziak, L., Lane, N.D.: Zero-cost proxies for lightweight NAS. In: ICLR (2021)
2. Alexandrov, A., Benidis, K., Bohlke-Schneider, M., Flunkert, V., Gasthaus, J., Januschowski, T., Maddix, D.C., Rangapuram, S., Salinas, D., Schulz, J., Stella, L., Türkmen, A.C., Wang, Y.: GluonTS: Probabilistic and neural time series modeling in Python. Journal of Machine Learning Research 21 (2020)
3. Assimakopoulos, V., Nikolopoulos, K.: The theta model: a decomposition approach to forecasting. International Journal of Forecasting 16(4) (2000)
4. Bai, S., Kolter, J.Z., Koltun, V.: An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv:1803.01271 (2018)
5. Beitner, J.: PyTorch Forecasting: Time series forecasting with PyTorch (2020)
6. Borovykh, A., Bohte, S., Oosterlee, C.W.: Conditional time series forecasting with convolutional neural networks. arXiv:1703.04691 (2017)
7. Box, G.E., Jenkins, G.M., Reinsel, G.C., Ljung, G.M.: Time series analysis: forecasting and control (2015)
8. Caruana, R., Niculescu-Mizil, A., Crew, G., Ksikes, A.: Ensemble selection from libraries of models. In: Greiner, R. (ed.) Proceedings of the 21st International Conference on Machine Learning (ICML'04). Omnipress (2004)
9. Cho, K., van Merrienboer, B., Gülçehre, Ç., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014)
10. Dahl, S.M.J.: TSPO: an AutoML approach to time series forecasting. Ph.D. thesis, Universidade NOVA de Lisboa (2020)
11. De Livera, A.M., Hyndman, R.J., Snyder, R.D.: Forecasting time series with complex seasonal patterns using exponential smoothing. Journal of the American Statistical Association 106(496) (2011)
12. Dong, X., Yang, Y.: NAS-Bench-201: Extending the scope of reproducible neural architecture search. In: Proceedings of the International Conference on Learning Representations (ICLR'20) (2020), published online: iclr.cc
13. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR (2021)
14. Elsken, T., Metzen, J.H., Hutter, F.: Neural architecture search. In: Automatic Machine Learning: Methods, Systems, Challenges (2019)
15. Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., Smola, A.: AutoGluon-Tabular: Robust and accurate AutoML for structured data. arXiv:2003.06505 (2020)
16. Falkner, S., Klein, A., Hutter, F.: BOHB: Robust and efficient hyperparameter optimization at scale. In: ICML (2018)
17. Feurer, M., Hutter, F.: Hyperparameter optimization. In: Automatic Machine Learning: Methods, Systems, Challenges (2019)
18. Feurer, M., Klein, A., Eggensperger, K., Springenberg, J.T., Blum, M., Hutter, F.: Efficient and robust automated machine learning. In: NeurIPS (2015)
19. Flores, B.E.: A pragmatic view of accuracy measurement in forecasting. Omega (1986)
20. Godahewa, R., Bergmeir, C., Webb, G.I., Hyndman, R.J., Montero-Manso, P.: Monash time series forecasting archive. In: NeurIPS Track on Datasets and Benchmarks (2021)
21. Halvari, T., Nurminen, J.K., Mikkonen, T.: Robustness of AutoML for time series forecasting in sensor networks. In: IFIP Networking Conference (2021)
22. Hewamalage, H., Bergmeir, C., Bandara, K.: Recurrent neural networks for time series forecasting: Current status and future directions. International Journal of Forecasting (2021)
23. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. (1997)
24. Hutter, F., Hoos, H., Leyton-Brown, K.: An efficient approach for assessing hyperparameter importance. In: ICML (2014)
25. Hutter, F., Hoos, H.H., Leyton-Brown, K.: Sequential model-based optimization for general algorithm configuration. In: Learning and Intelligent Optimization (2011)
26. Hyndman, R., Koehler, A.B., Ord, J.K., Snyder, R.D.: Forecasting with exponential smoothing: the state space approach (2008)
27. Hyndman, R.J., Athanasopoulos, G.: Forecasting: principles and practice (2021)
28. Hyndman, R.J., Khandakar, Y.: Automatic time series forecasting: the forecast package for R. Journal of Statistical Software 27 (2008)
29. Hyndman, R.J., Koehler, A.B.: Another look at measures of forecast accuracy. International Journal of Forecasting (2006)
30. Jamieson, K.G., Talwalkar, A.: Non-stochastic best arm identification and hyperparameter optimization. In: AISTATS (2016)
31. Januschowski, T., Gasthaus, J., Wang, Y., Salinas, D., Flunkert, V., Bohlke-Schneider, M., Callot, L.: Criteria for classifying forecasting methods. International Journal of Forecasting 36(1) (2020)
32. Javeri, I.Y., Toutiaee, M., Arpinar, I.B., Miller, J.A., Miller, T.W.: Improving neural networks for time-series forecasting using data augmentation and AutoML. In: BigDataService (2021)
33. Jin, H., Song, Q., Hu, X.: Auto-Keras: An efficient neural architecture search system. In: SIGKDD (2019)
34. Klein, A., Tiao, L., Lienart, T., Archambeau, C., Seeger, M.: Model-based asynchronous hyperparameter and neural architecture search (2020)
35. Kurian, J.J., Dix, M., Amihai, I., Ceusters, G., Prabhune, A.: BOAT: A Bayesian optimization AutoML time-series framework for industrial applications. In: BigDataService (2021)
36. Li, L., Jamieson, K.G., DeSalvo, G., Rostamizadeh, A., Talwalkar, A.: Hyperband: A novel bandit-based approach to hyperparameter optimization. J. Mach. Learn. Res. (2017)
37. Li, T., Zhang, J., Bao, K., Liang, Y., Li, Y., Zheng, Y.: AutoST: Efficient neural architecture search for spatio-temporal prediction. In: SIGKDD (2020)
38. Lim, B., Arık, S.Ö., Loeff, N., Pfister, T.: Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting (2021)
39. Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Ruhkopf, T., Sass, R., Hutter, F.: SMAC3: A versatile Bayesian optimization package for hyperparameter optimization. Journal of Machine Learning Research (2022)
40. Makridakis, S., Spiliotis, E., Assimakopoulos, V.: The M4 competition: Results, findings, conclusion and way forward. International Journal of Forecasting (2018)
41. Makridakis, S., Spiliotis, E., Assimakopoulos, V.: The M4 competition: 100,000 time series and 61 forecasting methods. International Journal of Forecasting (2020)
42. Meisenbacher, S., Turowski, M., Phipps, K., Rätz, M., Müller, D., Hagenmeyer, V., Mikut, R.: Review of automated time series forecasting pipelines. CoRR abs/2202.01712 (2022), https://arxiv.org/abs/2202.01712
43. Montero-Manso, P., Athanasopoulos, G., Hyndman, R.J., Talagala, T.S.: FFORMA: Feature-based forecast model averaging. International Journal of Forecasting 36(1) (2020)
44. Olson, R.S., Bartley, N., Urbanowicz, R.J., Moore, J.H.: Evaluation of a tree-based pipeline optimization tool for automating data science. In: GECCO (2016)
45. van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A.W., Kavukcuoglu, K.: WaveNet: A generative model for raw audio. In: ISCA Speech Synthesis Workshop (2016)
46. Oreshkin, B.N., Carpov, D., Chapados, N., Bengio, Y.: N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. In: ICLR (2020)
47. Paldino, G.M., De Stefani, J., De Caro, F., Bontempi, G.: Does AutoML outperform naive forecasting? In: Engineering Proceedings. vol. 5 (2021)
48. Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A.V., Gulin, A.: CatBoost: Unbiased boosting with categorical features. In: NeurIPS (2018)
49. Real, E., Aggarwal, A., Huang, Y., Le, Q.V.: Regularized evolution for image classifier architecture search. In: AAAI (2019)
50. Salinas, D., Flunkert, V., Gasthaus, J., Januschowski, T.: DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting 36(3) (2020)
51. Shah, S.Y., Patel, D., Vu, L., Dang, X., Chen, B., Kirchner, P., Samulowitz, H., Wood, D., Bramble, G., Gifford, W.M., Ganapavarapu, G., Vaculín, R., Zerfos, P.: AutoAI-TS: AutoAI for time series forecasting. In: SIGMOD (2021)
52. Talagala, T.S., Hyndman, R.J., Athanasopoulos, G., et al.: Meta-learning how to forecast time series. Monash Econometrics and Business Statistics Working Papers 6 (2018)
53. Thornton, C., Hutter, F., Hoos, H.H., Leyton-Brown, K.: Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms. In: SIGKDD (2013)
54. Trapero, J.R., Kourentzes, N., Fildes, R.: On the identification of sales forecasting models in the presence of promotions. Journal of the Operational Research Society (2015)
55. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NeurIPS (2017)
56. Wen, R., Torkkola, K., Narayanaswamy, B., Madeka, D.: A multi-horizon quantile recurrent forecaster. 31st Conference on NeurIPS, Time Series Workshop (2017)
57. Wu, B., Li, C., Zhang, H., Dai, X., Zhang, P., Yu, M., Wang, J., Lin, Y., Vajda, P.: FBNetV5: Neural architecture search for multiple tasks in one run. arXiv:2111.10007 (2021)
58. Xiao, Y., Qiu, Y., Li, X.: A survey on one-shot neural architecture search. In: IOP Conference Series: Materials Science and Engineering. vol. 750. IOP Publishing (2020)
59. Ying, C., Klein, A., Christiansen, E., Real, E., Murphy, K., Hutter, F.: NAS-Bench-101: Towards reproducible neural architecture search. In: ICML (2019)
60. Zela, A., Klein, A., Falkner, S., Hutter, F.: Towards automated deep learning: Efficient joint neural architecture and hyperparameter search. In: ICML 2018 AutoML Workshop (2018)
61. Zimmer, L., Lindauer, M., Hutter, F.: Auto-PyTorch Tabular: Multi-fidelity meta-learning for efficient and robust AutoDL. IEEE Transactions on Pattern Analysis and Machine Intelligence (2021)
62. Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. In: CVPR (2018)
A Other Hyperparameters in our Configuration Space

Besides the choice of the neural architecture, the hyperparameters applied to train a neural network also play a crucial role in the performance of the pipeline. Most of our hyperparameter search space is inherited from Auto-PyTorch for classification [?]¹. Here we give a brief overview of the additional forecasting-customized hyperparameters.

¹ https://github.com/automl/Auto-PyTorch

Our network is expected to generate one of the following outputs: distribution, quantile or scalar. A network with distribution output is trained with a log-probability loss, while it can freely select the sort of output distribution (here we implement Gaussian and Student's t distributions).
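As a minimal sketch of such a distribution head, assuming a PyTorch decoder that produces a hidden state h of width d_model (class and layer names are illustrative, not the framework's actual modules):

    import torch
    import torch.nn as nn

    class GaussianHead(nn.Module):
        """Project the decoder state to the mean and scale of a Gaussian
        per time step; trained by maximizing the log-probability of the
        observed targets."""

        def __init__(self, d_model):
            super().__init__()
            self.mean = nn.Linear(d_model, 1)
            self.raw_scale = nn.Linear(d_model, 1)

        def forward(self, h):
            # softplus keeps the predicted scale strictly positive
            scale = nn.functional.softplus(self.raw_scale(h)) + 1e-6
            return torch.distributions.Normal(self.mean(h), scale)

    def log_prob_loss(dist, target):
        """Negative log-probability training objective for distribution outputs."""
        return -dist.log_prob(target).mean()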
A network with quantile output is asked to generate a set of output quantiles. Here we only ask the model to forecast the upper bound, median value and lower bound of the target values, while the quantiles of the upper and lower bound are set as hyperparameters. Last but not least, networks with scalar output only generate a single value for each time step. Nevertheless, networks with scalar output can be trained with various loss functions, e.g., L1 loss, L2 loss, or mean absolute scaled error (MASE) [?], etc.
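For reference, MASE scales the absolute forecast error by the in-sample error of a seasonal-naive forecast. A minimal NumPy sketch of the metric (the training loss in the framework would be a differentiable variant of this):

    import numpy as np

    def mase(y_true, y_pred, y_train, m=1):
        """Mean absolute scaled error: absolute forecast errors divided by
        the in-sample mean absolute error of a seasonal-naive forecast
        with seasonality m."""
        scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
        return float(np.mean(np.abs(y_true - y_pred)) / scale)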
During inference time, we convert the distribution in the following ways: either we take the mean of the distribution as its scalar output, or we sample a certain amount of points from the distribution and take the mean or median values of the samples. All these strategies are considered as hyperparameters that will be optimized by our optimizer. Networks with quantile and scalar output simply forecast with their median value and scalar value, respectively.

We implement a sliding window approach to generate the inputs for all the models. The size of the sliding window is heavily dependent on the task at hand, thus we consider the sliding window for the target tasks as a multiple of one base window size. Following [?], we set the base window size to be the seasonality period S (if available) that is no smaller than the forecasting horizon H of the task; if H is greater than all the possible S, we simply take the largest S. As a hyperparameter, the window size ranges from the base window size to 3× the base window size. Additionally, the longest sequence that a CNN can handle is restricted by its receptive field: for TCN models, we simply take their maximal receptive field as the size of the sliding window.

The sliding window approach results in a large amount of overlap between different samples. To avoid overfitting and reduce training time, similar to other frameworks [?], we restrict the number of batches at each epoch: the number of training sample instances at each epoch then becomes a fixed value: batch size × num batches. We generate the training instances in the following two ways: either each series in the training set is expected to have the same amount of samples, or we sample each time step across all the series uniformly. As Auto-PyTorch has already implemented batch size as one of its hyperparameters, we simply add the number of batches per epoch and the sample strategy as an additional set of hyperparameters.
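A small helper illustrating one reading of the base-window-size rule above; picking the smallest sufficiently large S, and falling back to H when no seasonality is known, are assumptions of this sketch rather than statements from the text:

    def base_window_size(horizon, seasonalities):
        """Base window size: a seasonality period S no smaller than the
        forecasting horizon H; if H exceeds every S, take the largest S.
        The fallback to H when no seasonality is available is an
        assumption of this sketch."""
        large_enough = [s for s in seasonalities if s >= horizon]
        if large_enough:
            return min(large_enough)
        return max(seasonalities, default=horizon)

    # The tunable window size then ranges from 1x to 3x this base value.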
Neural networks work best if their input values are bounded. However, unlike tabular datasets where all the data is sampled from the same distribution, the scales of each series in the same dataset can be diversely distributed. Additionally, even the data inside each individual series might not be stationary, i.e., the distribution of the test set might no longer stay in the range of the training/validation sequences. Thus, similar to [?], we only normalize the data inside each mini-batch such that the input of the network is kept in a reasonable range. Similar to other AutoML tools [?], data can be scaled in different ways, whereas the scaling method is considered as a hyperparameter.
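A minimal sketch of such a batch-local scaler; mean-abs scaling is just one of the selectable methods, and the function name and interface are illustrative:

    import torch

    def scale_minibatch(past_targets, eps=1e-8):
        """Normalize each series inside a mini-batch by its own mean
        absolute value so that network inputs stay in a reasonable range
        regardless of the scale of the series. past_targets has shape
        (batch, time); the returned scale is needed to map forecasts back
        to the original range."""
        scale = past_targets.abs().mean(dim=1, keepdim=True) + eps
        return past_targets / scale, scale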
B Hyperparameter Importance for each Dataset

In Section 4.2, we compute the importance of all hyperparameters over all the datasets, showing that no single architecture dominates the optimization process. Here we study the hyperparameter importance with respect to each individual dataset and evaluate the importance of each hyperparameter. A diverse selection of four datasets is presented in Figure 1. Here we show the hyperparameter importance on the highest budget (1.0).

Fig. 1: Hyperparameter importance plots based on fANOVA results for four datasets: "Hospital", "M4 Quarterly", "Electricity Weekly" and "Electricity Hourly".
It can be seen that architecture-related hyperparameters are among the most important hyperparameters for individual tasks, while different tasks assign different importance values to different architectures. To shed a bit of light on the impact of the data distribution on hyperparameter importance, we compare "Electricity Weekly" and "Electricity Hourly" side-by-side. Even comparing these two datasets with similar features from the same domain, differences in hyperparameter importance and preferred architectures can be observed. Both tasks consider
the hyperparameters from Transformer as the most important ones. However, "Electricity Weekly" prefers MLP as its second most important architecture while "Electricity Hourly" selects the hyperparameters from TCN, showing that even if the data is sampled from the same distribution, the sampling frequency might influence the choice of the optimal architecture.
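The per-dataset importances above are computed with fANOVA. As a rough, self-contained stand-in (not the paper's exact analysis), one can fit a random-forest surrogate on the optimizer's evaluated configurations and read off feature importances; fANOVA proper additionally decomposes the surrogate's variance into individual and interaction effects:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder data: rows are evaluated configurations over 5 numeric
    # hyperparameters, y their validation losses. A real analysis would
    # use the optimization history instead of random numbers.
    rng = np.random.default_rng(0)
    X = rng.random((200, 5))
    y = rng.random(200)

    surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    names = ["learning_rate", "window_size", "batch_size", "num_batches", "target_scaler"]
    for name, imp in sorted(zip(names, surrogate.feature_importances_), key=lambda p: -p[1]):
        print(f"{name}: {imp:.3f}")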
C Further Results on the Ablation Study

Fig. 2: Validation losses over time with different multi-fidelity approaches. We compute the area under the curves (AUC) of our approach (PE) and the naive multi-fidelity optimizer (FE) and attach them in the figure.

In Section 4.3, we show that our proxy-evaluation approach helps to achieve a better any-time performance on the "Dominick" dataset. We show the results on two other datasets ("Kaggle Web Traffic Weekly" and "M4 Monthly") in Figure 2.