WP(27)375. Robust optimisation in algorithmic investment strategies
This research develops a portfolio of four algorithmic strategies that produce Long/Short signals based on t+1 close price predictions of the underlying instrument. The main instrument used is the S&P 500 index, and the data cover the period from 1990-01-01 to 2021-04-23. Each strategy is based on a different theory and aims to perform well in a different market regime. The objective is to have a set of uncorrelated investment strategies based on different logics, such as trend following, a contrarian approach, statistical methods, and macroeconomic news. Each strategy was individually generated through a personalized Walk-Forward optimisation, in which the model seeks to choose the most robust combination of parameters, rather than the best one, in terms of risk-adjusted returns. The robustness of all strategies was tested by changing all parameters selected at the beginning of the optimisation. Additionally, the robustness of the portfolio of strategies was tested by applying it to another American index, the Nasdaq Composite. Finally, an ensemble model was created by combining the signals from all investment strategies for our two base instruments. Results show that over the last 31 years the portfolio obtained returns four (seven) times larger than the Buy & Hold strategy on the S&P 500 (Nasdaq Composite) with a similar level of risk.
Full Text: Robert Ślepaczuk, Sergio Castellano Gómez
C4 C14 C45 C53 C58 G13
algorithmic trading strategies, robust optimisation criteria, overoptimisation, walk-forward optimisation, ensemble investment model
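The walk-forward logic described in the abstract – re-selecting parameters on each in-sample window and preferring a robust parameter plateau over the single best peak – can be sketched as follows. The window lengths, the parameter grid, and the three-neighbour plateau-averaging rule are illustrative assumptions, not the paper's actual settings.

```python
# Illustrative walk-forward optimisation skeleton.  Parameters are
# re-selected on each in-sample window and evaluated on the following
# out-of-sample window; the selection rule rewards robustness rather
# than the single best (possibly overfitted) score.

def walk_forward_windows(n_obs, in_sample=500, out_sample=125):
    """Yield (in-sample slice, out-of-sample slice) index pairs,
    rolling forward by one out-of-sample block at a time."""
    start = 0
    while start + in_sample + out_sample <= n_obs:
        yield (slice(start, start + in_sample),
               slice(start + in_sample, start + in_sample + out_sample))
        start += out_sample

def robust_choice(scores_by_param):
    """Pick the centre of the best 3-parameter plateau (average score
    of a parameter and its two neighbours), instead of the single
    highest-scoring parameter."""
    params = sorted(scores_by_param)
    best, best_avg = None, float("-inf")
    for i in range(1, len(params) - 1):
        window = params[i - 1:i + 2]
        avg = sum(scores_by_param[p] for p in window) / 3
        if avg > best_avg:
            best, best_avg = params[i], avg
    return best
```

With scores `{10: 0.2, 20: 1.5, 30: 0.1, 40: 0.9, 50: 1.0, 60: 0.8}` the single best parameter is 20, an isolated peak, while `robust_choice` selects 50, the centre of a stable region – the kind of trade-off the abstract describes.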
WP(26)374. The Great Lockdown: information, noise and macroeconomic fluctuations
This paper argues that not only actual lockdowns but also noisy information about them can affect economies. We construct a New Keynesian model with imperfect information about how long a lockdown will last. On the one hand, a false signal about a lockdown lowers consumption, investment, employment and output, and this effect can be quantitatively sizable. On the other hand, true information about a lockdown being introduced can also be misinterpreted and hence affect agents' decisions in a way quantitatively different from the one desired by the authorities. To the extent that the authorities have less noisy information about future lockdowns than the private sector, they can reduce these undesired fluctuations by communicating the lockdown policy precisely. Importantly, under some circumstances only radical improvements in information precision are successful.
Full Text: Grzegorz Wesołowski, Michał Brzoza-Brzezina
D83 E32 E61 E65 I18 J21
Covid-19, lockdown, imperfect information, communication
WP(25)373. Applying Hybrid ARIMA-SGARCH in Algorithmic Investment Strategies on S&P500 Index
This research aims to compare the performance of ARIMA, as a linear model, with that of combinations of ARIMA and GARCH-family models in forecasting S&P500 log returns, in order to construct algorithmic investment strategies on this index. We use data collected from Yahoo Finance. The dataset has daily frequency and covers the period from 01/01/2000 to 31/12/2019. Using a rolling window approach, we compare ARIMA with the hybrid models to examine whether hybrid ARIMA-SGARCH and ARIMA-EGARCH can really reflect the specific time-series characteristics and have better predictive power than the simple ARIMA model. To assess the precision and quality of these models' forecasts, we compare their equity lines, their forecast error metrics (MAE, MAPE, RMSE) and their performance metrics (annualized compounded return, annualized standard deviation, maximum drawdown, information ratio and adjusted information ratio). The results show that the hybrid models outperform ARIMA and the benchmark (a Buy&Hold strategy on the S&P500 index). These results are not sensitive to varying window sizes, the type of distribution or the type of GARCH model.
Full Text: Robert Ślepaczuk, Nguyen Vo
C4 C14 C45 C53 C58 G13
algorithmic investment strategies, ARIMA, ARIMA-SGARCH, ARIMA-EGARCH, hybrid model, forecast stock returns, model robustness
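The rolling-window signal generation underlying such strategies can be illustrated with a toy AR(1) forecaster standing in for the full ARIMA-SGARCH/EGARCH pipeline; only the rolling refit and the long/short rule on the sign of the predicted return are taken from the abstract, while the window length and model are simplifying assumptions.

```python
import numpy as np

def ar1_forecast(window):
    """One-step-ahead forecast from an AR(1) fitted by least squares.
    A toy stand-in for the ARIMA-GARCH models discussed in the paper."""
    x, y = window[:-1], window[1:]
    x_c, y_c = x - x.mean(), y - y.mean()
    phi = (x_c @ y_c) / (x_c @ x_c)          # OLS slope
    return y.mean() + phi * (window[-1] - x.mean())

def rolling_signals(log_returns, window=250):
    """Long (+1) when the predicted next-day log return is positive,
    short (-1) otherwise; the model is refitted on each rolling window."""
    signals = []
    for t in range(window, len(log_returns)):
        pred = ar1_forecast(log_returns[t - window:t])
        signals.append(1 if pred > 0 else -1)
    return np.array(signals)
```

On a perfectly alternating return series the AR(1) fit recovers the sign-flipping pattern exactly, which makes the mechanics easy to verify before plugging in a real estimator.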
WP(24)372. Stability of the Representativeness Heuristic: Further Evidence from Choices Between Lottery Tickets
The representativeness heuristic (RH) proposes that people expect even a small sample to have similar characteristics to its parent population. One domain in which it appears to operate is the preference for combinations of numbers on lottery tickets: most players seem to avoid very characteristic, “unrepresentative” combinations, e.g., only containing very low numbers. Likewise, many players may avoid betting on a recently drawn combination because it would seem particularly improbable to be drawn again. We confirm both of these tendencies in a lab experiment and corroborate their external validity in two field experiments. However, we only find a weak link between these two choices: the same people do not necessarily exhibit the two biases. In this sense, there is little consistent manifestation of the RH across different tasks at the individual level. Nevertheless, there are some links related to rationality across the two choices – people who are willing to forgo a monetary payment to get the preferred ticket in one task are also willing to do it in the other. We find such preferences to be related to the misperception of probabilities and providing intuitive, incorrect answers in the Cognitive Reflection Test.
Full Text: Michał Krawczyk, Joanna Rachubik
C93 D01 D81 D91
Decision making under risk, Lottery choice, Perception of randomness, Number preferences in lotteries, Representativeness heuristic
WP(23)371. Application of machine learning in quantitative investment strategies on global stock markets
The thesis undertakes the subject of machine learning-based quantitative investment strategies. Several technical analysis indicators were employed as inputs to machine learning models such as Neural Networks, K Nearest Neighbors, Regression Trees, Random Forests, Naïve Bayes classifiers, Bayesian Generalized Linear Models and Support Vector Machines. The models were used to generate trading signals on the WIG20, DAX, S&P500 and selected CEE indices in the period from 2002-01-01 to 2020-10-30. Strategies were compared with each other and with the benchmark buy-and-hold strategy in terms of the achieved levels of risk and return. The quality of estimation was evaluated on independent subsets and with the use of sensitivity analysis. The research results indicated that quantitative strategies generate better risk-adjusted returns than passive strategies and that, for the analysed indices, the Bayesian Generalized Linear Model and Naïve Bayes were predominantly the best-performing models. A more comprehensive rank-based approach, built on the results for all analysed models and indices, allowed us to select the Bayesian Generalized Linear Model as the model which, on average, generated the best results.
Full Text: Robert Ślepaczuk, Jan Grudniewicz
C4 C14 C45 C53 C58 G13
quantitative investment strategies, machine learning, neural networks, regression trees, random forests, support vector machine, technical analysis, equity stock indices, developed and emerging markets, information ratio
WP(22)370. Predicting football outcomes from Spanish league using machine learning models
High-quality football predictive models can be very useful and profitable. In this research, we therefore set out to construct machine learning models to predict football outcomes in games from the Spanish LaLiga and then compared them with historical forecasts extracted from bookmakers, whose knowledge is commonly considered deep and of high quality. The aim of the paper was to design models with the highest possible predictive performance, obtaining results close to the bookmakers' or even building better estimators. The work included detailed feature engineering based on previous achievements in this domain and on our own proposals. The constructed and selected set of variables was used with four machine learning methods, namely Random Forest, AdaBoost, XGBoost and CatBoost. The algorithms were compared based on the Area Under the Curve (AUC) and the Ranked Probability Score (RPS). RPS was used as the benchmark when comparing the estimated probabilities from the trained models with the forecasts implied by bookmakers' odds. For a deeper understanding and explanation of the demonstrated methods, which are considered black-box approaches, Permutation Feature Importance (PFI) was used to evaluate the impact of individual variables. Features extracted from bookmakers' odds proved the most important in terms of PFI. Furthermore, XGBoost achieved the best results on the validation set (RPS equal to 0.1989), with predictive power similar to that of the bookmakers' odds (their RPS between 0.1977 and 0.1984). The results of the trained estimators were promising, and this article shows that competing with bookmakers is possible using the demonstrated techniques.
Full Text: Marcin Chlebus, Michał Lewandowski
C13 C51 C52 C53 C61 L83 Z29
predicting football outcomes, machine learning, betting, adaboost, random forest, xgboost, catboost, ranked probability score, auc, permutation feature importance
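The Ranked Probability Score used for the comparison has a simple closed form for an ordered set of outcomes (home win / draw / away win): the average squared gap between the cumulative predicted and cumulative observed distributions. A minimal sketch, with the usual 1/(r-1) normalisation, might look like this.

```python
def ranked_probability_score(probs, outcome):
    """RPS for a single match.  `probs` holds the predicted
    probabilities in outcome order; `outcome` is the index of the
    realised outcome.  Lower is better; 0 means a perfect forecast."""
    r = len(probs)
    cum_p, cum_o, score = 0.0, 0.0, 0.0
    for i in range(r - 1):              # the last cumulative gap is always 0
        cum_p += probs[i]
        cum_o += 1.0 if i == outcome else 0.0
        score += (cum_p - cum_o) ** 2
    return score / (r - 1)
```

For example, a perfect forecast scores 0, while a maximally wrong one (all mass on the opposite end of the ordering) scores 1 – which is why an RPS near 0.198 for three-way football outcomes is a meaningful, bookmaker-level number.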
WP(21)369. Determinants of residential real estate prices: Poland case study
The real estate market is an important part of the economy. Its influence is compounded by sectors related to real property, such as construction and the capital and rental markets. Real estate market specifics, such as high capital intensity, rigidity of supply and time lags, make macroeconomic imbalances easy to emerge. Sudden changes in market conditions lead to severe consequences for the whole economy. Hence, housing markets are important not only for house investors, real estate developers and financial institutions, but also for governments and central banks.
This paper constitutes a comprehensive overview of house price determinants. It can be perceived as a guide to the residential real estate market, especially the Polish one. In Poland, house prices have been consistently increasing since 1990, when the transition to a market economy took place. The statistical Pole gets wealthier from year to year, which, in conjunction with a strong desire to live on one's own, constitutes a powerful driving force of demand for residential real estate.
The methods used in this study are the descriptive method and statistical models; in addition, scatter plots and Pearson's correlation coefficients between variables are presented. The empirical study is based on Polish market data published by the Central Statistical Office and Centrum AMRON-SARFiN. We aggregated the data by voivodeship and quarter; they cover the 2010-2019 period.
Full Text: Krzysztof Spirzewski, Michał Ruszuk
E31 G21 K25 R31
real estate market, price determinants, mortgage
WP(20)368. Does the framing affect the WTP for consumption goods in realistic shopping settings?
In this study, I examined the influence of the framing effect on the valuation of consumption goods in realistic shopping settings. In four field experiments comprising 1602 shopping center customers as participants, I elicited willingness to pay (WTP) for consumer products by manipulating framing conditions (positive vs. negative framing). Although my four experiments involved two different types of products (durable vs. fast-moving), two different types of framing (attribute vs. goal) and two different valuation procedures (hypothetical vs. consequential), their results were remarkably consistent. I observed that the framing effect had no impact on WTP for the presented products. In the light of both this study and the existing literature, I suspect that the framing effect is more likely to appear in solely hypothetical judgement and assessment tasks than in the context of eliciting consumer WTP.
Full Text: Magdalena Brzozowicz
D91 C93 M31
framing effect, field experiment, willingness to pay, WTP
WP(19)367. Exchange traded funds: U.S. Financial Market case study
The presence of exchange traded funds (ETFs) has been well recognised in many financial markets around the world since their inception in the 1990s. The widespread interest in them is due to their many advantages and confirms that they are worth investigating. The main advantages include flexibility and a large window of investment opportunities for retail investors; ETFs have become popular with institutional investors as well. However, when they were introduced to the markets, the main point was to provide individual investors with cheaper access to passive index funds. In this paper we provide comprehensive knowledge of these funds in one particular market: the United States financial market. The aim of this research is to analyse and discuss the presence of ETFs in the financial markets. We analyse four ETFs and three indices; the results show that three out of the four ETFs are not cointegrated with the index they track. On the other hand, correlation tests show that the ETFs and their underlying indices are highly correlated. The main methods used are the determination of the integration orders of variables, cointegration vectors and tests, and an error correction model – where applicable – all performed in RStudio. The data cover the period 2010-2019.
Full Text: Krzysztof Spirzewski, Aleksandra Tkaczow
stock exchange, exchange traded funds, U.S. financial market
WP(18)366. Enhanced Index Replication Based on Smart Beta and Tail-Risk Asset Allocation
The main goal of this research paper is to create an algorithmically managed ETF which tracks the SPX index and provides Smart Beta exposure. The authors apply the following simple index replication methods: partial correlation, non-negative least squares, beta coefficient, and dynamic time warping. First, the authors try to reverse engineer the Index Tracking process in an automated and fair manner, taking into account e.g. transaction costs. Additionally, the authors apply a constraint on the total number of assets used in the replication process, which is limited to a certain N. Then, the authors develop a Smart Beta framework based on limiting negative tail risk. The positive excess return (alpha) is captured and used to compensate for the underperformance of the replicated Index, or paid out in the form of a dividend. Moreover, with the enhancement methods applied (the Kurtosis/Skewness and Excess Return Cushion (ERC) enhancements), the authors' main goal is to keep the Tracking Error (TE) at a fixed level, although with a significant overweight on Positive TE and underweight on Negative TE. In the paper, data from 04-Jan-2016 to 31-Dec-2020 are used as the training window, while the first quarter of 2021 (Q1 2021) is used as the out-of-sample and out-of-time testing period. Additionally, the authors measure the replicated Index's performance against the SPY, VOO, and IVV ETFs. The authors find empirical evidence that it is possible to track the SPX Index within the limits of 4-5% TE with a limited number of assets. Moreover, after implementing alpha accumulation, the authors outperform the benchmark ETFs in terms of minimizing the TE, but did not succeed in providing statistically significantly better returns than the SPX Index.
Full Text: Robert Ślepaczuk, Kamil Korzeń
C4 C14 C45 C53 C58 G13
exchange-traded funds, enhanced index replication methods, smart beta, asset allocation, partial correlation, non-negative least squares, dynamic time warping
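Tracking Error, the quantity the enhancements above trade off, is conventionally the annualised standard deviation of active (portfolio minus index) returns. The sketch below uses that convention; the split into out- and underperformance totals is a simplified reading of the paper's Positive TE / Negative TE distinction, not its exact definition.

```python
import numpy as np

def tracking_error(portfolio_ret, index_ret, periods_per_year=252):
    """Annualised tracking error: sample std of daily active returns."""
    active = np.asarray(portfolio_ret) - np.asarray(index_ret)
    return active.std(ddof=1) * np.sqrt(periods_per_year)

def split_active_returns(portfolio_ret, index_ret):
    """Total out- and underperformance versus the index: a simplified
    stand-in for the Positive TE / Negative TE distinction."""
    active = np.asarray(portfolio_ret) - np.asarray(index_ret)
    return active[active > 0].sum(), active[active < 0].sum()
```

A replication that matches the index exactly has zero TE; the paper's goal of "overweighting Positive TE" amounts to keeping the first component of the split large relative to the second while the overall std stays near the 4-5% target.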
WP(17)365. Trade-related effects of Brexit. Implications for Central and Eastern Europe
We use a global computable general equilibrium (CGE) model to analyze several Brexit scenarios and assess their impact on the EU New Member States (NMS), complementing the existing literature. Our scenarios are based on expected outcomes of the negotiations, i.e. a Soft Brexit with a limited FTA and a Hard Brexit governed by WTO MFN rules. The shocks imposed on the CGE model include modifications of both tariff and non-tariff barriers. While the former are based on actual tariff data, the latter are estimated using an econometric model for both merchandise trade and services. Our results show that the macroeconomic effects of Brexit are mild, with a slight decline in NMS GDP of roughly 0.4% even in the case of a Hard Brexit. However, some sectors may experience fairly significant drops in output, in particular the food sector and some other export-oriented manufacturing sectors.
Full Text: Jan Hagemejer, Jan Jakub Michałek, Maria Dunin-Wąsowicz, Jacek Szyszka
F17 F10 F13
CGE modelling, international trade, Brexit, trade policy
WP(16)364. Spatial Machine Learning – New Opportunities for Regional Science
This paper is a methodological guide to using machine learning in the spatial context. It provides an overview of the existing spatial toolbox proposed in the literature: unsupervised learning, which deals with the clustering of spatial data, and supervised learning, which displaces classical spatial econometrics. It shows the potential and pitfalls of using this developing methodology. It catalogues and comments on the usage of spatial clustering methods (for locations and values, separately and jointly) for mapping, bootstrapping, cross-validation, GWR modelling, and density indicators. It shows details of spatial machine learning models, combined with spatial data integration, modelling, model fine-tuning and prediction, to deal with spatial autocorrelation and big data. The paper delineates "already available" and "forthcoming" methods and offers inspiration for transplanting modern quantitative methods from other thematic areas into regional science research.
Full Text: Katarzyna Kopczewska
C31 R10 C49
spatial machine learning, clustering, spatial covariates, spatial cross-validation, spatial autocorrelation
WP(15)363. Re-meander, rewet, rewild! Overwhelming public support for restoration of small rivers in the three Baltic Sea basin countries
The Baltic Sea is among the world's most oxygen-depleted seas, so the region requires urgent mitigation measures to significantly reduce nitrogen and phosphorus inputs from land through rivers, which cannot be achieved without large-scale restoration of wetland buffer zones. The manuscript summarises the findings of a discrete choice experiment aimed at assessing the preferences of Danish, German, and Polish citizens towards the ecosystem services of small lowland rivers of the Baltic Sea basin. Our results suggest that respondents in all the studied countries are willing to pay substantial amounts to improve water quality in rivers and the Baltic Sea, as well as to restore naturally meandering rivers and natural riparian vegetation. Wild marshes and wetland agriculture were equally valued as the most desirable options. Respondents systematically cared about the appearance of small rivers in their neighbourhood. We conclude that re-meandering, re-wetting of floodplains, and restoration of wild marshes or development of wetland agriculture could gain a lot of public support in Europe.
Full Text: Marek Giergiczny, Sviataslau Valasiuk, Wiktor Kotowski, Halina Galera, Jette Bredahl Jacobsen, Julian Sagebiel, Wendelin Wichtmann, Ewa Jabłońska
Baltic Sea, discrete choice experiment, ecosystem services, restoration, small rivers, willingness to pay
WP(14)362. Privacy trade-offs in the ride-hailing services
We test users' readiness to co-finance a ride-hailing service with their personal data, applying a Discrete Choice Experiment. We design an experiment in which respondents are asked to choose between hypothetical app-based taxi rides which offer discounts as compensation for intruding on their privacy, and a regular service. Our analysis compares how awareness of the rights stemming from the GDPR affects respondents' privacy preferences. A cross-group analysis indicates that reminding users about their rights stemming from the GDPR significantly increases their valuation of personal data. The results of the WTA analysis suggest that there is a market for "pay with your data" business models.
Full Text: Michał Paliński
C25 D12 L51
economics of privacy, mobile apps, DCE, mixed logit, WTA
WP(13)361. Machine learning in the prediction of flat horse racing results in Poland
Horse racing has been the subject of many researchers' considerations; they studied market efficiency and applied complex mathematical formulas to predict race results. We were the first to compare selected machine learning methods in creating a profitable betting strategy for two common bets, Win and Quinella. Six classification algorithms were used under different betting scenarios, namely Classification and Regression Tree (CART), Generalized Linear Model (Glmnet), Extreme Gradient Boosting (XGBoost), Random Forest (RF), Neural Network (NN) and Linear Discriminant Analysis (LDA). Additionally, Variable Importance was applied to determine the leading horse racing factors. The data were collected from flat racetracks in Poland from 2011 to 2020 and describe 3,782 Arabian and Thoroughbred races in total. We managed to profit under specific circumstances and achieved a correct bets ratio of 41% for the Win bet and over 36% for the Quinella bet using LDA and Neural Networks. The results demonstrate that it is possible to bet effectively using the chosen methods and indicate a possible market inefficiency.
Full Text: Marcin Chlebus, Piotr Borowski
C53 C55 C45
horse racing prediction, racetrack betting, Thoroughbred and Arabian flat racing, machine learning, Variable Importance
WP(12)360. Don’t Worry, Be Happy – But Only Seasonally
Current scientific knowledge allows us to assess the impact of socioeconomic variables on musical preferences. The research methods in previous studies were psychological experiments and surveys conducted on small groups, or analyses of the influence of only one or two variables at the level of the whole society. Instead, inspired by The Economist's article about February being the gloomiest month in terms of the music listened to, we created a dataset with many different variables that allows us to build more reliable models than previous datasets did. We used the Spotify API to create a monthly dataset with the average valence for 26 countries for the period from January 1, 2018, to December 1, 2019. Our study almost fully confirmed the effects of summer, December, and the number of Saturdays in a month, and contradicted the February effect. In the context of the index of freedom and diversity, the models do not show much consistency. The influence of GDP per capita on valence was confirmed, while the impact of the happiness index was disproved. All models partially confirmed the influence of music genre on valence. Among the weather variables, two models confirmed the significance of the temperature variable. All in all, the effects we analyzed can broaden artists' knowledge of when to release new songs or support recommendation engines for streaming services.
Full Text: Mateusz Kijewski, Szymon Lis, Michał Woźniak, Maciej Wysocki
C01 C23 I31
valence, spotify, happiness, statistical panel analysis, explainable machine learning
WP(11)359. Comparison of the accuracy in VaR forecasting for commodities using different methods of combining forecasts
No model dominates existing VaR forecasting comparisons. This problem may be solved by combining forecasts. This study investigates daily volatility forecasting for commodities (gold, silver, oil, gas, copper) over 2000-2020 and identifies the source of performance improvements between individual GARCH models and forecast-combining methods (the mean, the lowest, the highest, CQOM, quantile regression with elastic net or LASSO regularization, random forests, gradient boosting, neural networks) through the MCS. The results indicate that individual models achieve more accurate VaR forecasts at the 0.975 confidence level, but combined forecasts are more precise at 0.99. In most cases simple combining methods (the mean or the lowest VaR) are the best. Such evidence demonstrates that combining forecasts is important for getting better results from existing models. The study shows that combining forecasts allows for more accurate VaR forecasting, although it is difficult to find accurate, complex combining methods.
Full Text: Marcin Chlebus, Szymon Lis
C51 C52 C53 G32 Q01
Combining forecasts, Econometric models, Finance, Financial markets, GARCH models, Neural networks, Regression, Time series, Risk, Value-at-Risk, Machine learning, Model Confidence Set
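The simplest combining rules the study examines – the mean, the lowest (most conservative) and the highest VaR across models – can be written in a few lines. The model names and values below are invented for illustration; VaR is expressed here as a (negative) return quantile, so "lowest" means most conservative.

```python
import numpy as np

def combine_var(forecasts, method="mean"):
    """Combine one-day VaR forecasts from several models.
    `forecasts` maps model name -> VaR forecast (a negative return
    quantile).  'lowest' picks the most conservative forecast,
    mirroring the simple rules the study finds hard to beat."""
    values = np.array(list(forecasts.values()))
    if method == "mean":
        return values.mean()
    if method == "lowest":
        return values.min()
    if method == "highest":
        return values.max()
    raise ValueError(f"unknown method: {method}")
```

For example, with forecasts of -2%, -3% and -4% from three GARCH variants, the mean rule reports -3% while the lowest rule reports -4% – the kind of conservative pooling that the study finds competitive at the 0.99 level.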
WP(10)358. HCR & HCR-GARCH – novel statistical learning models for Value at Risk estimation
Market risk researchers agree that an ideal model for Value at Risk (VaR) estimation does not exist; different models' performance strongly depends on current economic circumstances. Under conditions of a sudden volatility increase, such as during the global economic crisis caused by the Covid-19 pandemic, no classical VaR model worked properly even for the group of the largest market indices. Therefore, the aim of the article is to present and formally test three novel statistical learning models for VaR estimation: HCR, HCR-GARCH and HCR-QML-GARCH, which, by considering an additional volatility term (due to the time context and statistical moments), should be able to perform well in times of market turbulence. In the benchmark procedure we compare the 1% and 2.5% one-day-ahead VaR forecasts obtained with the above models against the estimates of classical methods such as Historical Simulation, KDE, the Modified Cornish-Fisher Expansion, GARCH(1,1) with various distributions, RiskMetrics™, EVT and QML-GARCH. Four periods that vary in terms of market volatility – 2006-9, 2008-11, 2014-17 and mid-2016 to mid-2020 – are selected for six different stock market indices: DAX, WIG 20, MOEX, S&P 500, Nikkei and SHC. Model quality is tested from two perspectives: fulfilment of regulatory requirements and forecasting adequacy. The results show that HCR-GARCH outperforms the other models during periods of suddenly increased market volatility. At the same time, HCR-QML-GARCH liberalizes the conservative estimates of HCR-GARCH and allows its use under moderate volatility, without any major loss of quality in times of crisis.
Full Text: Marcin Chlebus, Michał Woźniak
G32 C52 C53 C58
Value at Risk, Hierarchical Correlation Reconstruction, GARCH, Standardized Residuals
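Fulfilment of regulatory requirements is typically checked by counting VaR exceptions. A standard tool for this is Kupiec's proportion-of-failures likelihood-ratio test, sketched below; the paper's exact backtest battery may differ, so this is an illustrative stand-in rather than its actual procedure.

```python
import math

def kupiec_pof(returns, var_forecasts, alpha=0.01):
    """Kupiec proportion-of-failures test: counts days on which the
    realised return fell below the VaR forecast and compares the
    exception rate with the nominal level alpha.  Returns the number
    of exceptions and the LR statistic (~ chi-squared(1) under H0)."""
    n = len(returns)
    x = sum(r < v for r, v in zip(returns, var_forecasts))

    def loglik(p):
        # Bernoulli log-likelihood of x exceptions in n days; the
        # guards avoid log(0) in the degenerate x = 0 or x = n cases
        ll = 0.0
        if n - x > 0:
            ll += (n - x) * math.log(1.0 - p)
        if x > 0:
            ll += x * math.log(p)
        return ll

    lr = -2.0 * (loglik(alpha) - loglik(x / n))
    return x, lr
```

When the observed exception rate equals the nominal level (e.g. 10 exceptions in 1000 days at the 1% level), the statistic is zero; large values signal that the model's coverage is wrong in either direction.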
WP(9)357. Are Transboundary Nature Protected Areas International Public Goods and Why People Think They Are (Not)? Hybrid Modelling Evidence from the EU Outer Borders
Former studies have shown that transboundary nature protected areas are not perceived as pure international public goods by citizens of neighbouring countries that share national parks. In this study, we assess what drives the valuation of nature protection on the other side of the border in two European transboundary nature areas, the Białowieża Forest and Fulufjället. Applying hybrid choice modelling, we account for people’s attitudes when eliciting their preferences towards transboundary nature protected areas, and examine the impact of attitudes on the degree to which those preferences are consistent with the international public good hypothesis. We found that the intention to visit the foreign part of the transboundary area, appreciation of transboundary justice, and altruism were the main drivers, whereas a suspicious attitude towards the neighbouring country, a propensity to free-ride, and manifestations of ‘patriotism’ acted as international public good mitigators to a limited degree only. The value of extending the protection regime abroad was still positive for Scandinavians, whilst for Polish and Belarusian respondents a policy aiming at extending protection abroad would lead to a loss of human welfare. Facilitating visits to the foreign part by enhancing cross-border access can be expected to shift people’s preferences towards transboundary co-operation.
Full Text: Sviataslau Valasiuk, Mikołaj Czajkowski, Marek Giergiczny, Tomasz Żylicz, Knut Veisten, Iratxe Landa Mata, Askill Harkjerr Halse, Per Angelstam
Q51 Q57 H41
International public goods, national parks, forest, transboundary nature protected areas, public preferences, willingness to pay, discrete choice experiment, hybrid modeling
WP(8)356. GARCHNet - Value-at-Risk forecasting with novel approach to GARCH models based on neural networks
This study proposes a new GARCH specification adapting a long short-term memory (LSTM) neural network architecture. Classical GARCH models have proven to give substantially good results in financial modeling, where high volatility can be observed. In particular, their high value is often praised in the case of Value-at-Risk. However, the lack of a nonlinear structure in most of the approaches means that the conditional variance is not represented well enough in the model. By contrast, the recent rapid advancement of deep learning methods is said to be capable of describing nonlinear relationships prominently. We suggest GARCHNet, a nonlinear approach to conditional variance that combines LSTM neural networks with maximum likelihood estimators of probability in GARCH. The distributions of the innovations considered in the paper are the normal, t and skewed t distributions; however, the approach enables extensions to other distributions as well. To evaluate our model, we conducted an empirical study on the log returns of the WIG 20 (Warsaw Stock Exchange Index) in four different time periods between 2005 and 2021 with varying levels of observed volatility. Our findings confirm the validity of the solution; however, we present several directions for developing it further.
Full Text: Mateusz Buczyński, Marcin Chlebus
G32 C52 C53 C58
Value-at-Risk, GARCH, neural networks, LSTM
WP(7)355. Persuasive messages will not raise COVID-19 vaccine acceptance. Evidence from a nation-wide online experiment
Although mass vaccination is the best way out of the pandemic, the share of sceptics is very substantial in most countries. Social campaigns can emphasize the many arguments that potentially raise acceptance of vaccines: e.g., that they have been developed, tested, and recommended by doctors and scientists; that they are safe, effective and in demand. We verified the effectiveness of such messages in an online experiment conducted in February and March 2021 with a sample of almost six thousand adult Poles, nationally representative in terms of key demographic variables. We presented respondents with different sets of information about vaccination against COVID-19. After reading the information bundle, they indicated whether they would be willing to be vaccinated. We also asked them to justify their answers and indicate who or what might change their opinion. Finally, we elicited a number of individual characteristics and opinions. We found that nearly 45% of the respondents were unwilling to be vaccinated and that none of the popular messages we used was effective in reducing this hesitancy. We also observed a number of significant correlates of vaccination attitudes, with men; older, richer, and non-religious individuals; those with higher education; and those trusting science rather than COVID-19 conspiracy theories being more willing to be vaccinated. We discuss important consequences for campaigns aimed at reducing COVID-19 vaccine hesitancy.
Full Text: Raman Kachurka, Michał Krawczyk, Joanna Rachubik
COVID-19, vaccine refusal, vaccination hesitancy, persuasion
WP(6)354. Institutional Framework of Central Bank Independence: Revisited
The subject of central bank independence (CBI) and its consequences for monetary policy and economic development has been widely explored in public debate and research discourse. The main aim of the article is to analyze central bank independence, considering the institutional environment in a given country. Our primary focus is on the relevance of de jure provisions for de facto CBI, as well as on the importance of other structural factors. We rely on a dataset consisting of various novel indices to approximate these issues across multiple dimensions and apply advanced econometric tools to investigate our research tasks. The outcome of the study implies that the interrelationships between de jure and de facto CBI are observable. Thus, these conclusions may be successfully applied in institutional design and public policies regarding central banking.
Jacek Lewkowicz, Michał Woźniak, Michał Wrzesiński
E50 E58 K20 P48
central bank independence, uncertainty, political economy, law & economics, institutional economics
WP(5)353. The Application of Machine Learning Algorithms for Spatial Analysis: Predicting of Real Estate Prices in Warsaw
The principal aim of this paper is to investigate the potential of machine learning algorithms in the context of predicting housing prices. The most important issue in modelling spatial data is accounting for spatial heterogeneity, which can bias the results when it is not taken into consideration. The purpose of this research is to compare the predictive power of the following methods: linear regression, artificial neural networks, random forest, extreme gradient boosting and the spatial error model. The evaluation was conducted using train, validation and test splits as well as k-fold cross-validation. We also examined the ability of the above models to identify spatial dependencies by calculating Moran's I for residuals obtained on in-sample and out-of-sample data.
Dawid Siwicki
C31 C45 C52 C53 C55 R31
spatial analysis, machine learning, housing market, random forest, gradient boosting
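The residual diagnostic mentioned in the abstract can be sketched as follows. This is a minimal illustration of Moran's I computed on model residuals, not the paper's actual implementation; the weight matrix and residual values below are hypothetical.

```python
import numpy as np

def morans_i(residuals, w):
    """Moran's I for model residuals given a row-standardised
    spatial weight matrix w (n x n). Values near 0 suggest no
    spatial autocorrelation; positive values suggest clustering."""
    z = residuals - residuals.mean()
    n = len(z)
    return n * (z @ w @ z) / (w.sum() * (z @ z))

# Toy example: 4 locations on a line, adjacent locations are neighbours.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
w /= w.sum(axis=1, keepdims=True)        # row-standardise the weights
res = np.array([2.0, 1.5, -1.0, -2.5])   # hypothetical model residuals
print(round(morans_i(res, w), 3))        # 0.5 -> positive spatial dependence
```

A well-specified spatial model (such as the spatial error model the paper compares against) should leave residuals with Moran's I close to zero.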
WP(4)352. Concept of peer-to-peer lending and application of machine learning in credit scoring
Numerous applications of AI are found in the banking sector, starting in the front office with enhanced customer recognition and personalized services, continuing in the middle office with automated fraud-detection systems, and ending in the back office with the automation of internal processes. In this paper we provide comprehensive information on the phenomenon of peer-to-peer lending in the modern view of alternative finance and crowdfunding from several perspectives. The aim of this research is to explore the peer-to-peer lending market model. We apply and check the suitability and effectiveness of credit scorecards in marketplace lending, along with determining the appropriate cut-off point.
We conducted this research by exploring recent studies and open-source data on marketplace lending. The scorecard development is based on an open dataset of P2P loans that contains the repayment record along with both hard and soft features of each loan. The quantitative part consists of applying a machine learning algorithm, namely logistic regression, to build a credit scorecard.
Aleksy Klimowicz, Krzysztof Spirzewski
artificial intelligence, peer-to-peer lending, credit risk assessment, credit scorecards, logistic regression, machine learning
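The scorecard approach can be illustrated with a rough sketch: fit a logistic regression on a loan feature and then search for a cut-off separating accepted from rejected applications. The data, the single feature, and the use of Youden's J as the cut-off criterion are all illustrative assumptions, not the authors' dataset or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: one "hard" feature (think debt-to-income ratio);
# higher values make default (y = 1) more likely.
x = rng.normal(size=200)
y = (x + rng.normal(scale=0.8, size=200) > 0).astype(float)
X = np.column_stack([np.ones_like(x), x])     # intercept + feature

# Fit logistic regression by plain gradient descent on the log-loss.
beta = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta -= 0.1 * X.T @ (p - y) / len(y)

# Pick the cut-off maximising Youden's J = sensitivity + specificity - 1.
scores = 1.0 / (1.0 + np.exp(-X @ beta))
def youden(c):
    pred = scores >= c
    sens = (pred & (y == 1)).sum() / (y == 1).sum()
    spec = (~pred & (y == 0)).sum() / (y == 0).sum()
    return sens + spec - 1
best = max(np.linspace(0.05, 0.95, 19), key=youden)
print(f"slope={beta[1]:.2f}, cut-off={best:.2f}")
```

In practice the cut-off would be chosen to reflect the asymmetric costs of funding a bad loan versus rejecting a good one, rather than a symmetric criterion like Youden's J.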
WP(3)351. Intergenerational redistributive effects of monetary policy
This paper investigates the distributional consequences of monetary policy across generations. We use a life-cycle model with a rich asset structure as well as nominal and real rigidities calibrated to the euro area using both macroeconomic aggregates and microeconomic evidence from the Household Finance and Consumption Survey. We show that the life-cycle profiles of income and asset accumulation decisions are important determinants of redistributive effects of monetary shocks and ignoring them can lead to highly misleading conclusions. The redistribution is mainly driven by nominal assets and labor income, less by real and housing assets. Overall, we find that a typical monetary policy easing redistributes welfare from older to younger generations.
Marcin Bielecki, Michał Brzoza-Brzezina, Marcin Kolasa
E31 E52 J11
monetary policy, life-cycle models, wealth redistribution
WP(2)350. The effects of child benefit on household saving
In 2016, a new child benefit was introduced in Poland: universal for the second and subsequent children in a family and means-tested for the first child. Substantial transfers under the new child benefit were granted to 60% of households with children. The generous child benefit, equal to 10% of the monthly median household income, caused an unexpected positive income shock for families with children. In this paper, we investigate how the new child benefit affected household decisions to consume or save this additional income. Applying the difference-in-differences method and Polish Household Budget Survey data for the years 2012-2018, we find a positive effect of the child benefit on household saving. Our estimates indicate that families receiving the child benefit (the treatment group) increased their saving rate by 8 percentage points after the child benefit reform in 2016. Over the same period, the control group (not receiving the child benefit) raised its saving rate by 2.9 percentage points.
Zofia Barbara Liberda, Katarzyna Sałach, Marek Pęczkowski
D14 G51 I38 P36
households, income, child benefit, saving
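The difference-in-differences logic behind the 8 vs. 2.9 percentage-point figures reduces to simple arithmetic. The pre/post saving-rate levels below are hypothetical, chosen only so that the changes match those reported in the abstract:

```python
# Hypothetical saving-rate levels; only the pre/post changes
# (8 pp for the treated, 2.9 pp for controls) come from the abstract.
treat_pre, treat_post = 0.050, 0.130   # treated families: +8.0 pp
ctrl_pre, ctrl_post = 0.060, 0.089     # control families: +2.9 pp

# DiD estimate: change among the treated minus change among controls,
# netting out the common time trend under the parallel-trends assumption.
did = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
print(f"DiD effect: {did:.3f}")        # 0.051, i.e. 5.1 percentage points
```

The identifying assumption is that, absent the benefit, the treated group's saving rate would have followed the same 2.9 pp trend as the control group.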
WP(1)349. Causes of the spatially uneven outflow of Warsaw inhabitants to the city’s suburbs: an economic analysis of the problem
In this article I provide a quantitative analysis of suburban migration patterns in Warsaw, Poland. Building on an extended gravity model of migration, I construct an econometric panel model to identify the key pull factors for migrants who move from Warsaw to its suburbs. The role of residential lot prices and the resulting possible endogeneity is also discussed. The results confirm that migrants choose boroughs with greater population density, higher average relative income and more amenities, but also a smaller distance to Warsaw’s city center and residential lot prices lower than those in Warsaw.
Honorata Bogusz
R23 P25 C23 C51
gravity model of migration, suburbanization, Mundlak terms, Correlated Random Effects
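The textbook gravity intuition underlying the extended model can be sketched as below. The paper's extended version additionally includes income, amenities and lot prices; the functional form, constant and distance-decay exponent here are illustrative assumptions.

```python
def gravity_flow(pop_origin, pop_dest, distance, g=1.0, beta=2.0):
    """Basic gravity model of migration: the expected flow scales with
    the product of origin and destination populations and decays with
    distance raised to the power beta."""
    return g * pop_origin * pop_dest / distance**beta

# All else equal, a suburb closer to the city attracts a larger flow.
near = gravity_flow(1_800_000, 50_000, distance=15.0)
far = gravity_flow(1_800_000, 50_000, distance=30.0)
print(near > far)  # True: with beta=2, halving the distance quadruples the flow
```

In the econometric version, the model is log-linearised and estimated on panel data, which is where the Mundlak terms and Correlated Random Effects from the keywords come in.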