The conference will take place at Eötvös Loránd University, Budapest, Hungary (Pázmány Péter sétány 1/A, Harmony Hall).

Rama Cont (University of Oxford)

Universal features of intraday price formation: perspectives from Deep Learning

Using a large-scale Deep Learning approach applied to a high-frequency database containing billions of electronic market quotes and transactions for US equities, we uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism relating the dynamics of supply and demand for a stock, as revealed through the order book, to subsequent variations in its market price. We assess the model by testing its out-of-sample predictions for the direction of price moves given the history of price and order flow, across a wide range of stocks and time periods. The universal price formation model is shown to exhibit a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors. Interestingly, these results also hold for stocks which are not part of the training sample, showing that the relations captured by the model are universal and not asset-specific.

The universal model – trained on data from all stocks – outperforms, in terms of out-of-sample prediction accuracy, asset-specific linear and nonlinear models trained on the time series of any given stock, showing that the universal nature of price formation weighs in favour of pooling together financial data from various stocks, rather than designing asset- or sector-specific models as is commonly done. Standard data normalizations based on volatility, price level or average spread, or partitioning the training data into sectors or categories such as large/small tick stocks, do not improve training results. On the other hand, inclusion of price and order flow history over many past observations is shown to improve forecasting performance, providing evidence of path-dependence in price dynamics.

Slides
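
As an illustration of the kind of model described in the abstract above, the sketch below trains a recurrent classifier that maps a window of order-book features to the direction of the next mid-price move. It is a minimal, hedged example: the window length, feature count, architecture and the synthetic data are assumptions for illustration only, not the setup used in the talk.

```python
import numpy as np
import tensorflow as tf

# Illustrative dimensions (assumptions): 100 past observations,
# 10 order-book features per observation (e.g. depth at the best
# quotes and signed order flow).
WINDOW, N_FEATURES = 100, 10

# Synthetic stand-in for normalized order-book feature histories;
# in practice these would be built from quotes and trades pooled
# across many stocks.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, WINDOW, N_FEATURES)).astype("float32")
y = rng.integers(0, 3, size=5000)  # 0 = down, 1 = flat, 2 = up

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # P(down), P(flat), P(up)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A single "universal" model can be trained on data pooled across stocks
# and then evaluated out-of-sample on stocks never seen during training.
model.fit(X, y, epochs=2, batch_size=128, validation_split=0.2, verbose=0)
```

Feeding many past observations into the classifier mirrors the path-dependence finding mentioned in the abstract.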

Paolo Giudici (University of Pavia)

Network models for robot-advisory asset allocation

Automated digital consultancy platforms (“Robot advisors”) reduce costs and improve the quality of the perceived service, speeding it up and making user involvement more transparent. These improvements are often offset by risk classification models that are simpler than those employed in traditional consultancy. We aim to exploit the large amount of data available to robot advisors to build portfolios that better fit the risk profiles of investors. This is made possible, on the one hand, by constructing groups of homogeneous risk profiles, based on user responses to MiFID questionnaires, and, on the other hand, by constructing homogeneous clusters of financial assets, based on their risk and return performance. We demonstrate that machine learning methods and, specifically, network models can be used to “automatise” the previous classifications and, eventually, to assess whether an investor’s portfolio matches his or her risk profile. We will apply the proposed methodology to “classical” portfolios, based on exchange traded funds, and to “hybrid” portfolios that contain bitcoin assets, traded in different exchange markets, for which there is the additional issue of determining which market drives the price.

Slides
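
One ingredient mentioned above is the construction of homogeneous clusters of assets from their risk and return behaviour. The sketch below shows one standard way to do this, with a correlation-based network distance and hierarchical clustering; the distance, linkage method and synthetic return data are assumptions for illustration, not necessarily the choices made in the talk.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Synthetic daily returns for 20 assets over one year, standing in for
# ETF and bitcoin return series.
rng = np.random.default_rng(1)
returns = rng.normal(scale=0.01, size=(250, 20))

# Correlation network: turn the correlation matrix into the usual
# distance d_ij = sqrt(2 * (1 - rho_ij)).
corr = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))
np.fill_diagonal(dist, 0.0)

# Hierarchical clustering on the condensed distances yields homogeneous
# asset groups; investor risk profiles from MiFID questionnaires could be
# clustered analogously and matched against these groups.
Z = linkage(squareform(dist, checks=False), method="average")
asset_groups = fcluster(Z, t=4, criterion="maxclust")
print(asset_groups)
```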

György Ottucsák (Morgan Stanley)

Machine Learning in Interest Rate Markets

Government bond trading is going through unprecedented change today: the role of the traditional (mostly voice broking) bond trading model is decreasing due to the electronification of the markets. Electronification brings more reliable and better-quality data, which gives a solid foundation for the application of advanced statistical approaches.

Our aim is to highlight applications in bond trading where machine learning techniques offer improvements.

László Ujfalusy - Attila Fekete (Morgan Stanley)

Credit risk early warning system based on sentiment analysis

Recently, more and more investors have been automatically extracting information from news articles in order to modify their investment decisions. The use of similar techniques for risk management is less common. Our goal is to show how automated text analysis can be tailored to extract credit-related information, and how such a system can make the work of credit coverage analysts easier and more efficient.
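
As a toy illustration of the general idea, and not of the system presented in the talk, the sketch below trains a supervised text classifier that flags credit-negative news; the example headlines, labels and model choice are all assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled examples (1 = credit-negative, 0 = neutral);
# a real system would use a large labelled news corpus.
texts = [
    "Issuer misses coupon payment and enters debt restructuring talks",
    "Rating agency downgrades the company citing weak liquidity",
    "Company reports record quarterly revenue and raises guidance",
    "Firm announces new product line and expands into Asia",
    "Covenant breach reported; lenders demand early repayment",
    "Company completes routine refinancing at lower interest cost",
]
labels = [1, 1, 0, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Score incoming headlines; persistently high scores for one issuer could
# trigger an early-warning alert for its credit coverage analyst.
score = clf.predict_proba(["Auditor raises going-concern doubts"])[0, 1]
print(f"credit-negative probability: {score:.2f}")
```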

Gábor Petneházi (University of Debrecen)

Exploring the predictability of range-based volatility estimators using RNNs

We studied the predictability of different range-based stock volatility estimators. Our aim was to compare the accuracy of a simple direction-of-change forecasting framework applied to different volatility estimates. Recurrent neural networks were used to make the predictions. We present empirical results for all 30 constituents of the Dow Jones Industrial Average index. According to our results, range-based volatility estimates can be predicted with some degree of accuracy, while the popular close-to-close estimator (the standard deviation of daily log returns) appears to be essentially unpredictable.

arXiv:1803.07152
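
The sketch below illustrates the kind of framework described above: compute one range-based estimator (Parkinson is used here as an example) and train a small recurrent network to predict the direction of its next change. The synthetic prices, window length and architecture are assumptions for illustration; the exact setup is in the arXiv reference above.

```python
import numpy as np
import tensorflow as tf

# Synthetic daily close/high/low prices, standing in for OHLC data of
# DJIA constituents.
rng = np.random.default_rng(2)
n_days = 2000
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, n_days)))
high = close * np.exp(np.abs(rng.normal(0, 0.005, n_days)))
low = close * np.exp(-np.abs(rng.normal(0, 0.005, n_days)))

# Parkinson range-based daily volatility estimate:
# sigma_t = sqrt( (ln(H_t / L_t))^2 / (4 ln 2) )
vol = np.sqrt(np.log(high / low) ** 2 / (4 * np.log(2)))

# Direction-of-change setup: from a window of past estimates, predict
# whether the next day's estimate is higher (1) or lower (0) than today's.
WINDOW = 20
X = np.array([vol[i:i + WINDOW] for i in range(n_days - WINDOW)])
y = (vol[WINDOW:] > vol[WINDOW - 1:-1]).astype("float32")
X = X[..., None].astype("float32")  # shape: (samples, WINDOW, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, validation_split=0.2, verbose=0)
```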

Josef Teichmann (ETH Zürich)

Machine Learning in Mathematical Finance

We explain two successful recent applications of machine learning techniques in mathematical finance, namely learning algorithms for calibration functionals and solving a real-world risk management problem (based on joint work with Hans Buehler, Christa Cuchiero, Lukas Gonon, Wahid Khosrawi-Sardroudi and Ben Wood). Several outlooks towards new directions are also presented.

Slides
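
As a toy illustration of the "learning a calibration functional" idea mentioned above, the sketch below trains a network offline on simulated (parameter, price-grid) pairs so that calibration online reduces to a single forward pass. Black-Scholes stands in for the pricing model purely for brevity; the models, grids and losses used in the talk are not reproduced here.

```python
import numpy as np
import tensorflow as tf
from scipy.stats import norm

# Black-Scholes call price: a toy stand-in for a more expensive model.
def bs_call(sigma, K, T, S0=1.0, r=0.0):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

# Offline phase: simulate parameters and the corresponding price grids.
rng = np.random.default_rng(3)
strikes = np.linspace(0.8, 1.2, 9)
maturities = np.array([0.25, 0.5, 1.0, 2.0])
sigmas = rng.uniform(0.05, 0.8, size=10000)
grids = np.array([bs_call(s, strikes[:, None], maturities[None, :]).ravel()
                  for s in sigmas]).astype("float32")

# Learn the calibration functional: price grid -> model parameter.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(grids.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(grids, sigmas.reshape(-1, 1).astype("float32"),
          epochs=5, batch_size=256, verbose=0)

# Online phase: "calibrating" to an observed grid is a single forward pass.
observed = grids[:1]  # pretend this grid was observed in the market
print("recovered sigma:", float(model(observed)[0, 0]), "true:", sigmas[0])
```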

László Györfi (BMGE)

Machine learning aggregation of portfolio selection strategies

This talk provides a survey of discrete-time, multi-period, sequential investment strategies for financial markets. Under a memoryless assumption on the underlying process generating the asset prices, the log-optimal portfolio achieves the maximal asymptotic average growth rate. For general dynamic portfolio selection, when asset prices are generated by a stationary and ergodic process, growth-optimal empirical strategies are presented, in which principles of nonparametric regression estimation and of machine learning aggregation are applied. The empirical performance of the methods is illustrated on NYSE data.
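
In standard notation (not taken verbatim from the talk), with $\mathbf{X}$ the vector of price relatives over one period and $\mathbf{b}$ a portfolio vector in the simplex $\Delta_d$, the log-optimal benchmark referred to above is

$$
\mathbf{b}^{*} = \arg\max_{\mathbf{b}\in\Delta_d} \mathbb{E}\bigl[\log \langle \mathbf{b}, \mathbf{X}\rangle\bigr],
\qquad
\lim_{n\to\infty} \frac{1}{n}\log S_n(\mathbf{b}^{*})
= \max_{\mathbf{b}\in\Delta_d} \mathbb{E}\bigl[\log \langle \mathbf{b}, \mathbf{X}\rangle\bigr]
\quad \text{a.s.,}
$$

where $S_n(\mathbf{b})$ denotes the wealth after $n$ periods of rebalancing to $\mathbf{b}$ and the market vectors are i.i.d. (the memoryless case). In the stationary and ergodic case the expectation is conditioned on the past, and the empirical strategies estimate this conditional expectation nonparametrically and aggregate the resulting experts.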

Blanka Horváth (King’s College London and Imperial College London)

Learning Rough Volatility

Calibration time being the bottleneck for models with rough volatility, we present ways to achieve substantial speed-ups along every step of the calibration process. In a first step we describe a powerful numerical scheme (based on functional central limit theorems) for pricing a large family of rough volatility models. In a second step we discuss various machine learning methods that significantly reduce calibration time for these models. By simultaneously calibrating several (classical and rough) models to market data, we reconfirm, as a byproduct of our calibration results, that volatility is rough: calibration performance is best for very small Hurst parameters in a multitude of market scenarios.

Slides
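
A minimal sketch of the second step described above, in the "learn the pricing map, then calibrate on the surrogate" spirit: a network is trained offline to approximate the parameter-to-price-grid map, and calibration then optimizes over the cheap surrogate. For brevity a one-parameter Black-Scholes map stands in for a rough volatility pricer, so the recovered parameter is a volatility rather than a Hurst exponent; all grids and settings are assumptions.

```python
import numpy as np
import tensorflow as tf
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Toy pricer (Black-Scholes call) standing in for a rough volatility model.
def bs_call(sigma, K, T, S0=1.0, r=0.0):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

# Step 1 (offline): learn the parameter -> price-grid map with a network.
rng = np.random.default_rng(4)
strikes, maturities = np.linspace(0.8, 1.2, 9), np.array([0.25, 0.5, 1.0])
sigmas = rng.uniform(0.05, 0.8, size=10000)
grids = np.array([bs_call(s, strikes[:, None], maturities[None, :]).ravel()
                  for s in sigmas]).astype("float32")

surrogate = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(grids.shape[1]),
])
surrogate.compile(optimizer="adam", loss="mse")
surrogate.fit(sigmas.reshape(-1, 1).astype("float32"), grids,
              epochs=5, batch_size=256, verbose=0)

# Step 2 (online): calibrate to "market" prices by optimizing over the
# cheap surrogate instead of re-running the expensive pricer.
market = bs_call(0.3, strikes[:, None], maturities[None, :]).ravel()

def objective(sigma):
    pred = surrogate(np.array([[sigma]], dtype="float32")).numpy()[0]
    return float(np.mean((pred - market) ** 2))

res = minimize_scalar(objective, bounds=(0.05, 0.8), method="bounded")
print("calibrated sigma:", res.x)  # close to the true value 0.3
```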

Leonardo Ferro (Morgan Stanley)

Derivatives pricing using Neural Networks

An algebraic approach for derivatives pricing using feedforward neural networks is presented. The approach is characterized by fast execution speed and better generalization properties than contemporary optimization techniques. Potential application to dynamic initial margin modelling is discussed.

Gábor Fáth (Morgan Stanley)

Pricing derivatives with TensorFlow

TensorFlow, Google's machine learning package, was developed for deep neural network modelling. Less well known, however, is that the platform also offers great features for derivatives pricing, thanks to its embedded AD (automatic differentiation) capabilities, as will be illustrated by a couple of examples in the talk.
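
As a minimal, self-contained illustration of the kind of feature referred to above (not one of the talk's examples), the sketch below prices a European call by Monte Carlo in TensorFlow and obtains delta and vega through automatic differentiation instead of finite differences; the contract and market parameters are arbitrary assumptions.

```python
import tensorflow as tf

# Inputs we want sensitivities for are declared as Variables.
S0 = tf.Variable(100.0)      # spot
sigma = tf.Variable(0.20)    # volatility
r, T, K = 0.01, 1.0, 105.0   # rate, maturity, strike
n_paths = 200_000

z = tf.random.normal([n_paths], seed=42)

with tf.GradientTape() as tape:
    # One-step lognormal simulation of the terminal price under GBM.
    ST = S0 * tf.exp((r - 0.5 * sigma**2) * T + sigma * (T ** 0.5) * z)
    payoff = tf.maximum(ST - K, 0.0)
    price = tf.exp(-r * T) * tf.reduce_mean(payoff)

# Automatic differentiation gives pathwise Greeks from the same computation.
delta, vega = tape.gradient(price, [S0, sigma])
print(float(price), float(delta), float(vega))
```

The same GradientTape mechanism extends to multi-step simulations and to sensitivities with respect to entire curves.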

Program committee:

  • L. Márkus (chair),
  • N.M. Arató,
  • J. Gáll,
  • Gy. Michaletzky,
  • G. Molnár-Sáska,
  • V. Prokaj,
  • M. Rásonyi

Local organisers:

  • A. Zempléni (chair),
  • Á. Backhausz,
  • V. Csiszár


Registration form