The conference will take place at the Aula Magna, Faculty of Law, Egyetem tér 1-3., District 5, Budapest, Hungary, H-1053.

Program:

9:00-9:10
Opening remarks
9:10-10:00
Luis Seco (University of Toronto):
The Dawn of the Social Fintech; AI, Sustainability and Social Science
10:00-10:25
Antal Jakovác (Wigner Institute):
Uncovering Hidden Laws in Time Series
10:25-10:45
Coffee break
10:45-11:10
Tamás Török (Morgan Stanley):
Leveraging ML to Transform the Advisor and Client Experience
11:10-11:35
Balázs Udvari (MSCI):
Exploring the use of machine learning in portfolio optimization
11:35-12:00
Máté Tóth (BlackRock):
Company Similarity using Large Language Models
12:00-12:25
Ferenc Bodon (KX):
From time series analysis to KDB.AI
12:25-13:00
Lunch break
13:00-14:00
Juho Kanniainen (Tampere University):
Reinforcement Learning in Empirical Deep Hedging
14:00-14:20
László Márkus (Eötvös University):
Deep learning the Hurst parameter of fractional processes; its reliability and effect on option pricing
14:20-14:40
Gábor Fáth (Eötvös University):
Fractional time series from quantum mechanics
14:40-15:00
Thomas Fouret (Citi):
Hedging earnings surprises
15:00
Closing

Juho Kanniainen (Tampere University)

Reinforcement Learning in Empirical Deep Hedging

Existing hedging strategies are typically based on specific financial models: either the strategies are directly based on a given option pricing model, or stock price and volatility models are used indirectly by generating synthetic data on which an agent is trained with reinforcement learning. In this research, we train an agent in a purely data-driven manner. In particular, we do not need any specification of volatility or jump dynamics but use large empirical intra-day data sets from actual stock and option markets. The agent is trained for the hedging of derivative securities using deep reinforcement learning (DRL) with continuous actions. The training data consists of intra-day option price observations on the S&P 500 index over 6 years, and on top of that, we use other data periods for validation and testing. We have two important empirical results. First, a DRL agent trained using synthetic data generated from a calibrated stochastic volatility model outperforms the classic Black-Scholes delta hedging strategy. Second, and more importantly, we find that a DRL agent trained empirically on actual intra-day stock and option prices directly, without prior specification of the underlying volatility or jump processes, has superior performance compared with the use of synthetic data. This implies that DRL can capture the dynamics of the S&P 500 from actual intra-day data and self-learn how to hedge actual options efficiently.
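For context, the classic Black-Scholes delta-hedging baseline that the DRL agent is benchmarked against can be sketched as follows. This is a minimal illustration written by the editor, not the authors' code; the price path, premium and rebalancing grid are hypothetical inputs.

```python
# Illustrative sketch of the Black-Scholes delta-hedging baseline
# (not the authors' implementation).
import math

def bs_call_delta(S, K, T, r, sigma):
    """Black-Scholes delta of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    # N(d1) via the error function
    return 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))

def delta_hedge_pnl(path, K, T, r, sigma, dt, premium):
    """P&L of a self-financing delta hedge of a short call along a price path."""
    cash = premium
    shares = 0.0
    t = 0.0
    for S in path[:-1]:
        target = bs_call_delta(S, K, T - t, r, sigma)
        cash -= (target - shares) * S      # rebalance the stock position
        shares = target
        cash *= math.exp(r * dt)           # accrue interest on the cash account
        t += dt
    S_T = path[-1]
    payoff = max(S_T - K, 0.0)
    return cash + shares * S_T - payoff
```

A DRL agent replaces the `target` rule above with a learned policy; the baseline simply follows the model delta at each rebalancing time.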

pa­per

slides

Luis Seco (University of Toronto)

The Dawn of the Social Fintech; AI, Sustainability and Social Science

There is a long-standing belief that a discipline becomes a science when it can be mathematized. With the advent of AI, the concept of science is being widely expanded, with social science a likely winner. This talk will present a possible view of the future of a largely expanded financial sector driven by technology, and how social science can become a new partner.

slides

Antal Jakovác (Wigner Institute)

Uncovering Hidden Laws in Time Series

In this talk a novel method is presented for extracting relevant feature information from time series. We find patterns in the form of (linear) laws and characterize the time series by its best-fitting laws. The resulting Linear Law-based feature Transformation (LLT) makes classification tasks more effective, as will be demonstrated in several examples, including financial data analysis.
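One way to read the "linear law" idea above (the editor's hedged interpretation, not the speaker's implementation) is as a time-delay embedding followed by a search for the direction that the embedded windows most nearly annihilate:

```python
# Hedged sketch of a linear-law search: embed the series into lag windows
# and find the unit vector w that minimizes ||X w|| (the best-fitting
# linear relation among consecutive values). Not the speaker's code.
import numpy as np

def linear_law(series, window):
    """Return the unit vector w minimizing ||X w|| over lag-embedded rows."""
    x = np.asarray(series, dtype=float)
    # rows of X are consecutive windows of the series (time-delay embedding)
    X = np.lib.stride_tricks.sliding_window_view(x, window)
    # the right singular vector of the smallest singular value is the
    # direction the data most nearly satisfies as a linear law
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[-1]
```

For a linear trend x_t = t, the second difference vanishes, so the recovered law is proportional to (1, -2, 1): the windows satisfy x_t - 2x_{t+1} + x_{t+2} = 0 exactly.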

slides

Ferenc Bodon (KX)

From time series analysis to KDB.AI

q/kdb+ stands as the preeminent time series analysis tool in the global capital market, boasting unrivaled speed and efficiency over the past three decades. Its distinguishing features, including vector and functional programming, alongside native data tables with an extended SQL, have cemented it as a foundational language for quantitative analysts. In 2022, KX introduced PyKX, a seamless integration tool that empowers Python developers to harness the power of q/kdb+ without necessitating q proficiency. Furthermore, the introduction of KDB.AI revolutionizes knowledge-based vector databases, enabling developers to construct scalable, reliable and real-time applications by providing advanced search, recommendation and personalization for AI applications.

Máté Tóth (BlackRock)

Company Similarity using Large Language Models

Identifying companies with similar profiles is a core task in finance, with a wide range of applications in portfolio construction, asset pricing and risk attribution. When a rigorous definition of similarity is lacking, financial analysts usually resort to ‘traditional’ industry classifications such as the Global Industry Classification Standard (GICS), which assigns a unique category to each company at different levels of granularity. Due to their discrete nature, though, GICS classifications do not allow for ranking companies in terms of similarity. In this paper, we explore the ability of pre-trained and fine-tuned large language models (LLMs) to learn company embeddings based on the business descriptions reported in SEC filings. We show that we can reproduce GICS classifications using the embeddings as features. We also benchmark these embeddings on various machine learning and financial metrics and conclude that companies that are similar according to the embeddings are also similar in terms of financial performance metrics, including return correlation.
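The ranking the abstract alludes to reduces to nearest-neighbor search in embedding space. A minimal sketch follows; the vectors here are made-up stand-ins, not LLM embeddings of SEC filings, and the function name is the editor's own:

```python
# Illustrative only: rank companies by cosine similarity of their
# description embeddings (stand-in vectors, not the paper's data).
import numpy as np

def rank_by_similarity(embeddings, names, query):
    """Return (name, similarity) pairs sorted by cosine similarity to `query`."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize rows
    q = E[names.index(query)]
    sims = E @ q                                       # cosine similarities
    order = np.argsort(-sims)
    return [(names[i], float(sims[i])) for i in order if names[i] != query]
```

Unlike a discrete GICS bucket, the similarity score is continuous, which is exactly what makes ranking possible.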

Tamás Török (Morgan Stanley)

Leveraging ML to Transform the Advisor and Client Experience

An introduction to Morgan Stanley Wealth Management's journey to build an intelligent organization. In recent years we have focused on scaling up our machine learning capabilities, building several tools to help the business. I will walk the audience through three practical use cases where machine learning was implemented in data products: digital client engagement, advisor engagement with clients, and matching advisors better with clients. For each use case we will cover the business problem, the concept of the solution, and the key learnings.

László Márkus and Dániel Boros (Eötvös University)

Deep learning the Hurst parameter of fractional processes; its reliability and effect on option pricing

We train a scale-free LSTM-type neural network on a massive number of fractional Brownian motion and fractional Ornstein-Uhlenbeck process trajectories to learn the Hurst exponent of those processes. While the network's performance is excellent in terms of mean squared error, the absolute and relative error quantiles are substantial due to a skewed distribution. Even so, the network still outperforms the traditional statistical Hurst estimators. There is a line in the literature advocating a fractional-Brownian-motion-based modeling of the S&P 500 index. Conditionally accepting that model, we illustrate the effect of the network's misestimation of the Hurst exponent on option pricing. We present actual calculations on two-days-to-maturity call prices of Nov 3, 2023.
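For context, one of the traditional statistical Hurst estimators that such a network is typically compared against is the classic rescaled-range (R/S) method. A compact sketch (the editor's illustration, not the authors' benchmark code):

```python
# Sketch of the classic rescaled-range (R/S) Hurst estimator,
# one traditional baseline for Hurst estimation (illustrative only).
import numpy as np

def hurst_rs(x, min_win=8):
    """Estimate the Hurst exponent of a series via rescaled-range analysis."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    sizes, rs = [], []
    n = min_win
    while n <= N // 2:
        vals = []
        for start in range(0, N - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())
            R = dev.max() - dev.min()        # range of cumulative deviations
            S = seg.std()
            if S > 0:
                vals.append(R / S)
        sizes.append(n)
        rs.append(np.mean(vals))
        n *= 2
    # H is the slope of log(R/S) against log(window size)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope
```

For uncorrelated Gaussian noise the estimate should land near H = 0.5, though the R/S method is known to be biased upward on short samples, which is one reason learned estimators are of interest.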

slides

Balázs Udvari (MSCI)

Exploring the use of machine learning in portfolio optimization

Portfolio managers often need to solve optimization problems to determine the ideal allocation of their managed accounts. Depending on their strategy of choice, this can involve dealing with difficult mathematical models. Traditionally these are handled by solving a sequence of easier subproblems, where each subproblem is determined by a heuristic step based on information already obtained during the process. Recently there has been growing interest in applying machine learning methods to these problems. In the talk, we will give an overview of certain optimization problems and discuss some ways machine learning can play a part in solving them.
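A concrete example of the kind of subproblem such pipelines repeatedly solve is the textbook mean-variance step, which has a closed form when only the budget constraint is active. This example is the editor's illustration, not MSCI's optimizer:

```python
# Textbook mean-variance subproblem (editor's illustration):
# maximize  mu.w - (gamma/2) w' Sigma w  subject to  sum(w) = 1,
# solved in closed form via a Lagrange multiplier.
import numpy as np

def mean_variance_weights(mu, Sigma, gamma):
    """Fully-invested mean-variance optimal weights."""
    mu = np.asarray(mu, dtype=float)
    inv = np.linalg.inv(np.asarray(Sigma, dtype=float))
    ones = np.ones_like(mu)
    w = inv @ mu / gamma                   # unconstrained optimum
    # shift along Sigma^{-1} 1 so the weights sum to one
    w += inv @ ones * (1.0 - ones @ w) / (ones @ inv @ ones)
    return w
```

Real mandates add inequality constraints (long-only limits, turnover, cardinality), which is what breaks the closed form and motivates the sequence-of-subproblems and, lately, machine-learning approaches mentioned in the abstract.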

Gábor Fáth (Eötvös University)

Fractional processes from quantum mechanics

We map one-dimensional quantum systems to classical time series and explore how strong quantum correlations turn into nontrivial classical time autocorrelations. In particular, we show that the Luttinger-liquid properties of quantum magnets translate into multifractal time series characterized by nontrivial Hurst exponents. We show that the classical series can be sampled sequentially from the known or numerically determined Matrix Product State approximation of the quantum ground state.

slides

Thomas Fouret (Citi)

Hedging earnings surprises

Citi has developed a machine learning model that predicts vol-surface deformation scenarios around quarterly earnings in the US single-stock derivatives market, giving traders fast market color.

The workshop is free for registered participants. You can register until Nov 19, 2023. If you want to cancel your registration, contact the organizers at riskconf@ttk.elte.hu.

With my registration I consent to image and sound recordings being made of me at the event, and to these recordings being used by the organizer(s) in their internal and external communications (e.g. for reporting and giving information about the event, publicizing the event, and using them as reference).

These recordings of me may be used for the above-mentioned purposes by any media provider free of charge, without any place or time limitation, through any technology suitable for broadcasting to the public, without any limitation on the number of uses, and through every known utilization method stated in Act LXXVI of 1999 on Copyright.

With my registration, I permit my data to be stored and used for the organization of the current and future workshops. These data will not be shared with any third parties.

Program committee:

  • G. Fáth
  • L. Márkus
  • G. Molnár-Sáska
  • A. Zempléni

Local Organisers:

  • Á. Backhausz
  • V. Csiszár
  • V. Prokaj
  • A. Zempléni (chair)

Sponsors:

ELTE Risklab