WP(39)345. Applying Hurst Exponent in Pair Trading Strategies

Abstract

This research seeks an alternative approach to pair selection for a pairs trading strategy. We try to build an effective pairs trading strategy based on 103 stocks listed in the NASDAQ-100 index. The dataset has daily frequency and covers the period from 01/01/2000 to 31/12/2018. In this study, the Generalized Hurst Exponent, Correlation and Cointegration are employed to detect mean-reverting patterns in the time series of the linear combination of each pair of stocks. The results show that the Hurst method cannot outperform the benchmark, which implies that the market is efficient. This result is quite sensitive to varying the number of pairs traded and the rebalancing period, and less sensitive to the degree of financial leverage. Moreover, the Hurst method is better than the cointegration method but is not superior to the correlation method.
Robert Ślepaczuk, Quynh Bui
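As an illustration of the core technique, the sketch below estimates a generalized Hurst exponent H(q) from the scaling of the q-th moment of a spread's increments. This is a minimal reading of the method, not the authors' code; the function name, the lag range and the simulated spread are placeholders.

```python
import numpy as np

def generalized_hurst(series, q=2, max_lag=20):
    """Estimate the generalized Hurst exponent H(q) from the scaling law
    E|x(t+tau) - x(t)|^q ~ tau^(q * H(q))."""
    series = np.asarray(series, dtype=float)
    lags = np.arange(2, max_lag + 1)
    moments = [np.mean(np.abs(series[lag:] - series[:-lag]) ** q) for lag in lags]
    # Slope of log-moment vs log-lag, divided by q, gives H(q).
    slope, _ = np.polyfit(np.log(lags), np.log(moments), 1)
    return slope / q

# A spread with H < 0.5 is anti-persistent (mean-reverting) and hence a
# pairs trading candidate; H ~ 0.5 corresponds to a random walk.
spread = np.cumsum(np.random.normal(size=1000))  # placeholder spread series
print(generalized_hurst(spread, q=2))
```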

WP(38)344. We have just explained real convergence factors using machine learning

Abstract

There are several competing empirical approaches to identifying the factors of real economic convergence. However, all previous studies of cross-country convergence assume a linear model specification. This article uses a novel approach and shows the application of several machine learning tools to this topic, discussing their advantages over other methods, including the possibility of identifying nonlinear relationships without any a priori assumptions about their shape. The results suggest that the conditional convergence observed in earlier studies could have been a result of inappropriate model specification. We find that in a correct, non-linear approach, initial GDP is not (strongly) correlated with growth. In addition, the tools of interpretable machine learning make it possible to discover the shape of the relationship between average growth and initial GDP. Based on these tools we document the occurrence of club convergence.
Piotr Wójcik, Bartłomiej Wieczorek
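A minimal sketch of the kind of interpretable-ML exercise described above: a random forest is fitted to placeholder cross-country data and a partial dependence curve for initial GDP is traced manually. The variable names and the data-generating process are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder cross-country data: column 0 plays the role of (log) initial GDP,
# the remaining columns are conditioning factors.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
growth = 0.02 - 0.01 * np.tanh(X[:, 0]) + rng.normal(scale=0.005, size=120)

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, growth)

# Manual partial dependence: sweep initial GDP over a grid while averaging
# predictions over the observed values of all other regressors.
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 25)
pd_curve = []
for g in grid:
    X_mod = X.copy()
    X_mod[:, 0] = g                     # fix initial GDP at the grid value
    pd_curve.append(model.predict(X_mod).mean())
# The shape of pd_curve against grid reveals any nonlinearity in the
# growth-initial GDP relationship without assuming a functional form.
```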

WP(37)343. Predicting well-being based on features visible from space – the case of Warsaw

Abstract

In recent years, the availability of satellite imagery has grown rapidly. In addition, deep neural networks have gained popularity and become widely used in various applications. This article focuses on using innovative deep learning and machine learning methods in combination with data describing objects visible from space. High-resolution daytime satellite images are used to extract features for particular areas with the use of transfer learning and convolutional neural networks. The extracted features are then used in machine learning models (LASSO and random forest) as predictors of various socio-economic indicators. The analysis is performed at the local level of Warsaw districts. The findings from such an approach can be of great help in obtaining an almost continuous measurement of economic well-being, independently of statistical offices.
Piotr Wójcik, Krystian Andruszek
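The feature-extraction step could look roughly like the sketch below, which uses a pretrained ResNet50 as a fixed encoder (an assumption; the paper does not necessarily use this architecture) and feeds the resulting vectors into LASSO. The image tiles and the target indicator are placeholders.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.linear_model import Lasso

# Pretrained CNN without the classification head; global average pooling
# turns each image tile into a single 2048-dimensional feature vector.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

tiles = np.random.rand(16, 224, 224, 3) * 255.0  # placeholder district tiles
features = extractor.predict(preprocess_input(tiles))

# The extracted features then serve as predictors of a socio-economic indicator.
indicator = np.random.rand(16)                   # placeholder target variable
lasso = Lasso(alpha=0.1).fit(features, indicator)
```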

WP(36)342. Home advantage revisited. Did COVID level the playing fields?

Abstract

The COVID-19 pandemic swept fans out of the stadiums, but matches continued to be played in most major leagues. We make use of this natural experiment to investigate whether home-field advantage disappears when the home team is not supported by the audience. Focusing on four top European soccer leagues, we find such an effect in the Bundesliga only. We propose that this singularity may be related to the special role that fan associations play in German football.
Michał Krawczyk, Paweł Strawiński

WP(35)341. The impact of the results of football matches on the stock prices of soccer clubs

Abstract

The aim of this paper is to study the relationship between sport results and the stock prices of European football clubs. To show that connection, we use two econometric models. First, we conduct an event study analysis around the dates of football games to test for the existence of abnormal returns. Second, we use OLS regression to test the effect of the unexpected part of the result. Based on 2239 observations of football match results played between 01/08/2016 and 02/03/2020, we find a significant relationship between sport results and financial performance. Significant negative abnormal returns are observed around defeats and draws, while for wins the impact is unclear. Using the second model, we find positive coefficients on the unexpected number of points, which can be additional evidence of a link between football results and stock prices. Finally, we see potential for a systematic trading strategy on soccer stocks based on the presented results. Such an algorithmic strategy with market-neutral characteristics should beat the market regardless of market conditions.
Robert Ślepaczuk, Igor Wabik
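A compact sketch of a market-model event study of the kind described above; the estimation window length, the event window and the simulated returns are illustrative assumptions.

```python
import numpy as np

def abnormal_returns(stock, market, event_idx, est_len=120, window=3):
    """Market-model event study: fit alpha/beta on a pre-event estimation
    window, then compute AR_t = R_t - (alpha + beta * R_m,t) around the event."""
    est = slice(event_idx - est_len - window, event_idx - window)
    beta, alpha = np.polyfit(market[est], stock[est], 1)
    ev = slice(event_idx - window, event_idx + window + 1)
    ar = stock[ev] - (alpha + beta * market[ev])
    return ar, ar.sum()      # abnormal returns and CAR over the event window

# Placeholder daily returns; in the paper, the events are match dates and the
# stock is the club's listed share.
rng = np.random.default_rng(1)
market = rng.normal(0.0, 0.01, 500)
stock = 0.0002 + 0.8 * market + rng.normal(0.0, 0.01, 500)
ar, car = abnormal_returns(stock, market, event_idx=400)
```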

WP(34)340. What factors determine unequal suburbanisation? New evidence from Warsaw, Poland

Abstract

This article investigates the causes of spatially uneven migration from Warsaw to its suburban boroughs. The method is based on the gravity model of migration extended by additional measures of possible pulling factors. We report a novel approach to modelling suburbanisation: several linear and non-linear predictive models are estimated, and explainable AI methods are used to interpret the shape of the relationships between the dependent variable and the most important regressors. It is confirmed that migrants choose boroughs with better amenities and a smaller distance to the Warsaw city center.
Piotr Wójcik, Honorata Bogusz, Szymon Winnicki

WP(33)339. The impact of the content of Federal Open Market Committee post-meeting statements on financial markets – text mining approach

Abstract

This article examines the impact of FOMC statements on stock and foreign exchange markets with the use of text mining and modelling methods, including linear and non-linear algorithms. The proposed methodology is based on calculating the tone of FOMC statements, referred to as sentiment, and incorporating it as a potential predictor in the modelling process. Additionally, we incorporate the market surprise component as well as two financial indicators, the Purchasing Managers' Index and the Consumer Confidence Index, which gauge corporate managers' and retail customers' assessment of the economic situation and potential fluctuations. Eight event windows around the event are considered: 60-minute and 20-minute windows before the event, and 15-minute, 20-minute, 25-minute, 30-minute, 60-minute and 120-minute windows after the event. The research shows that, in the linear models, the sentiment of FOMC statements does not generate a significant response in any of the analyzed event windows, either for the S&P 500 Index or for the spot price of the EUR/USD currency pair. However, the market surprise component turned out to be a significant predictor for both the S&P 500 Index and the EUR/USD spot price, as did the PMI and the CCI for the EUR/USD spot price. In the non-linear models, a negative relation between the statement's sentiment score and the model prediction is observed for the EUR/USD spot price.
Piotr Wójcik, Ewelina Osowska
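A minimal sketch of dictionary-based tone scoring for a statement; the word lists here are invented placeholders (the paper's actual lexicon and text-mining pipeline may differ substantially, finance-oriented dictionaries such as Loughran-McDonald being typical choices).

```python
import re

# Illustrative word lists, not the study's lexicon.
POSITIVE = {"growth", "strengthened", "improved", "expansion", "gains"}
NEGATIVE = {"weakened", "declined", "strains", "risks", "contraction"}

def statement_tone(text):
    """Tone = (positive - negative) / total tokens of an FOMC statement."""
    tokens = re.findall(r"[a-z']+", text.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(len(tokens), 1)

print(statement_tone("Economic activity strengthened while inflation risks declined."))
```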

WP(32)338. Fractional differentiation and its use in machine learning

Abstract

This article covers the implementation of fractional (non-integer order) differentiation on four datasets based on the stock prices of main international stock indexes: WIG 20, S&P 500, DAX and Nikkei 225. The concept was proposed by Lopez de Prado to find the most appropriate balance between zero differentiation and full differentiation of a time series. The aim is to make a time series stationary while preserving its memory and predictive power. The paper also compares fractional and classical differentiation in terms of the effectiveness of artificial neural networks. The comparison is made from two viewpoints: Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The conclusion of the study is that fractionally differentiated time series performed better in the trained ANNs.
Janusz Gajda, Rafał Walasek
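Lopez de Prado's fixed-width window fractional differencing can be sketched as below; the weight threshold, the differencing order d = 0.4 and the simulated price series are illustrative choices, not values from the paper.

```python
import numpy as np

def fracdiff_weights(d, threshold=1e-4):
    """Weights of the fractional difference operator (1 - B)^d,
    truncated once they fall below the threshold (fixed-width window)."""
    w = [1.0]
    k = 1
    while True:
        w_k = -w[-1] * (d - k + 1) / k
        if abs(w_k) < threshold:
            break
        w.append(w_k)
        k += 1
    return np.array(w)

def fracdiff(series, d, threshold=1e-4):
    """Fractionally differentiate a series by convolving the truncated
    weights with the lagged values."""
    w = fracdiff_weights(d, threshold)
    width = len(w)
    return np.array([w @ series[t - width + 1:t + 1][::-1]
                     for t in range(width - 1, len(series))])

prices = 100 + np.cumsum(np.random.normal(size=1000))  # placeholder index levels
x = fracdiff(np.log(prices), d=0.4)  # stationary yet memory-preserving series
```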

WP(31)337. Variance Gamma Model in Hedging Vanilla and Exotic Options

Abstract

The aim of this research is to explore the performance of different option pricing models in hedging exotic options using FX data. We analyze a narrow class of Lévy processes - the Variance Gamma process - in hedging vanilla, Asian and lookback options. We ask whether an additional level of complexity, introduced by more sophisticated models, improves the effectiveness of hedging options, with hedging errors measured as the differences between portfolio values according to the model rather than real market data (which we do not have). We compare this model with its special case and the Black-Scholes model. We use data for the EUR/USD currency pair, assuming that option prices change according to the model (as we do not observe them directly). We use Monte Carlo methods to fit the model's parameters. Our results are not in line with the previous literature: there are no signs of the Variance Gamma process being better than Black-Scholes, and it seems that all three models perform equally well.
Robert Ślepaczuk, Bartłomiej Bollin
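A sketch of Monte Carlo simulation of the Variance Gamma process as a gamma-time-changed Brownian motion, with an empirical martingale correction for pricing; all parameter values are placeholders, not the fitted ones from the paper.

```python
import numpy as np

def simulate_vg_paths(theta, sigma, nu, T=1.0, n_steps=252, n_paths=10000, seed=0):
    """Variance Gamma paths: Brownian motion with drift theta and volatility
    sigma, time-changed by a gamma subordinator with variance rate nu
    (time increments ~ Gamma(shape=dt/nu, scale=nu))."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    g = rng.gamma(shape=dt / nu, scale=nu, size=(n_paths, n_steps))  # gamma time
    z = rng.standard_normal((n_paths, n_steps))
    increments = theta * g + sigma * np.sqrt(g) * z
    return np.cumsum(increments, axis=1)

# Monte Carlo price of a vanilla call under an assumed risk-neutral VG model
# for EUR/USD; parameters are placeholders.
s0, k, r, T = 1.10, 1.12, 0.01, 0.5
paths = simulate_vg_paths(theta=-0.1, sigma=0.1, nu=0.2, T=T)
x_T = paths[:, -1]
# Mean-correct so the discounted underlying is a martingale under simulation.
log_st = np.log(s0) + r * T + x_T - np.log(np.mean(np.exp(x_T)))
price = np.exp(-r * T) * np.mean(np.maximum(np.exp(log_st) - k, 0.0))
```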

WP(30)336. Impact of using industry benchmark financial ratios on performance of bankruptcy prediction logistic regression model

Abstract

The phenomenon of company bankruptcy is crucial for business partners and financial institutions, since business failure can cause huge losses. Researchers have continually aimed to improve model performance in the prediction of company bankruptcy. Some authors claim that evaluating a company's situation requires comparing its characteristics, defined as financial ratios, with the situation of the whole sector in order to obtain reliable conclusions. In this paper, we verify the hypothesis that using industry benchmarks (transforming raw financial ratio values into sectoral decile group numbers) improves the results of a logistic regression bankruptcy prediction model. Based on empirical results for the Polish market, it turns out that although models estimated on the different types of data have similar discriminatory power, the logistic regression using raw financial ratios obtains slightly better results than its industry equivalent based on sectoral decile group numbers. It is worth emphasizing that the empirical part of the paper uses information on about 109,000 companies, which is a rarity in bankruptcy prediction papers - researchers usually use small datasets with fewer than several hundred records.
Mateusz Heba, Marcin Chlebus
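The industry-benchmark transformation can be sketched in a few lines of pandas: each raw ratio is mapped to its within-sector decile group. Column names and data are placeholders.

```python
import numpy as np
import pandas as pd

# Placeholder data: a raw financial ratio with a sector label per company.
df = pd.DataFrame({
    "sector": np.random.choice(["retail", "energy", "IT"], size=1000),
    "roa": np.random.normal(0.05, 0.1, size=1000),
})

# Industry benchmark: replace each raw ratio with the number of the decile
# group it falls into within its own sector (0-9).
df["roa_decile"] = (
    df.groupby("sector")["roa"]
      .transform(lambda s: pd.qcut(s, 10, labels=False, duplicates="drop"))
)
# "roa_decile" (rather than raw "roa") then enters the logistic regression.
```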

WP(29)335. Lottery "strategies": monetizing players' behavioral biases

Abstract

The popularity of lotteries around the world is puzzling. In this paper, we study one factor, which might contribute to this phenomenon, namely lottery "strategies" that could allegedly improve players' odds. In an online survey of lottery players we find that such strategies are popular and their use is related to more frequent lottery play and a number of personality traits and beliefs about gambling. Systematically searching for websites and books, we amass the largest dataset of lottery strategies in existence. We subsequently analyze their descriptions, categorize them, and investigate how they exploit their target audience's behavioral biases, including the illusion of control, authority bias, magical thinking, the illusion of correlation, gambler's fallacy, hot hand fallacy, representativeness heuristic, availability heuristic, and regret aversion. We find that the strategies maintain gamblers' (false) beliefs about the possibility of controlling lottery results. This exploratory work contributes to a deeper understanding of (problem) gambling and lays the foundation for the design of experiments testing how the specific features of different strategies may interact with beliefs and trigger (excessive) lottery play.
Raman Kachurka, Michał Krawczyk

WP(28)334. Value-at-risk — the comparison of state-of-the-art models on various assets

Abstract

This paper compares different approaches to Value-at-Risk measurement based on parametric and non-parametric approaches. Three portfolios are taken into consideration: the first one containing only stocks from the London Stock Exchange, the second one based on different assets of various origins, and the third one consisting of cryptocurrencies. The data used cover a period of more than 20 years. In the empirical part of the study, parametric methods based on the mean-variance framework are compared with GARCH(1,1) and EGARCH(1,1) models. Different assumptions concerning the returns' distribution are taken into consideration. An adjustment for the fat-tails effect is made by using the Student's t distribution in the analysis. One-day-ahead 95% VaR estimates are then calculated. Thereafter, the models are validated using the Kupiec and Christoffersen tests and Monte Carlo simulation for reliable verification of the hypotheses. The overall goal of this paper is to establish whether the analyzed models accurately estimate the Value-at-Risk measure, especially when we take into account assets with various return distribution characteristics.
Robert Ślepaczuk, Karol Kielak
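A minimal sketch of a parametric Student's t VaR and the Kupiec unconditional coverage test; the simulated returns and the fixed degrees of freedom are illustrative assumptions (in the paper the distribution parameters are estimated).

```python
import numpy as np
from scipy import stats

def kupiec_test(violations, n_obs, alpha=0.05):
    """Kupiec unconditional coverage test: LR statistic for the hypothesis
    that the violation rate equals alpha (chi-squared with 1 df)."""
    x, p = violations, alpha
    rate = x / n_obs
    lr = -2 * ((n_obs - x) * np.log(1 - p) + x * np.log(p)
               - (n_obs - x) * np.log(1 - rate) - x * np.log(rate))
    return lr, 1 - stats.chi2.cdf(lr, df=1)

# Parametric one-day 95% VaR under a Student's t assumption.
returns = stats.t.rvs(df=5, scale=0.01, size=2000, random_state=0)
nu = 5  # assumed degrees of freedom; in practice estimated from data
# Rescale sample std to the t scale parameter: var = scale^2 * nu / (nu - 2).
var_95 = stats.t.ppf(0.05, df=nu, loc=returns.mean(),
                     scale=returns.std(ddof=1) * np.sqrt((nu - 2) / nu))
violations = int((returns < var_95).sum())
print(kupiec_test(violations, len(returns)))
```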

WP(27)333. Predicting prices of S&P500 index using classical methods and recurrent neural networks

Abstract

This study implements algorithmic investment strategies with buy/sell signals based on classical methods and a recurrent neural network model (LSTM). The research compares the performance of investment algorithms on the S&P 500 index time series covering 20 years of data, from 2000 to 2020. The paper presents an approach to the dynamic optimization of parameters during the backtesting process by using a rolling training-testing window. Every method was tested for robustness to changes in parameters and evaluated with appropriate performance statistics, e.g. the Information Ratio and Maximum Drawdown. The combination of signals from different methods was stable and outperformed the Buy & Hold benchmark, doubling its returns at the same level of risk. A detailed sensitivity analysis revealed that the classical methods using a rolling training-testing window were significantly more robust to changes in parameters than the LSTM model, in which hyperparameters were selected heuristically.
Robert Ślepaczuk, Mateusz Kijewski
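A walk-forward skeleton of the rolling training-testing idea, using a simple moving-average rule as a stand-in for any of the paper's signal generators (including the LSTM); the window lengths and the candidate parameter grid are arbitrary.

```python
import numpy as np
import pandas as pd

def rolling_backtest(prices, train_len=1000, test_len=250):
    """Walk-forward scheme: re-optimize a parameter on each training
    window, then trade it on the subsequent out-of-sample window."""
    signals = pd.Series(0.0, index=prices.index)
    ret = prices.pct_change()
    for start in range(0, len(prices) - train_len - test_len + 1, test_len):
        train = prices.iloc[start:start + train_len]
        # Illustrative optimization: pick the moving-average length with the
        # best in-sample return (a stand-in for tuning any model).
        best_w = max((20, 50, 100, 200),
                     key=lambda w: (np.sign(train - train.rolling(w).mean())
                                    .shift(1) * ret.loc[train.index]).sum())
        test_idx = prices.index[start + train_len:start + train_len + test_len]
        ma = prices.rolling(best_w).mean()
        signals.loc[test_idx] = np.sign(prices.loc[test_idx] - ma.loc[test_idx])
    return signals  # +1 long / -1 short positions, out of sample only

prices = pd.Series(100 * np.exp(np.cumsum(np.random.normal(0.0002, 0.01, 5000))))
positions = rolling_backtest(prices)
strategy_returns = positions.shift(1) * prices.pct_change()
```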

WP(26)332. Predicting uptake of a malignant catarrhal fever vaccine by pastoralists in northern Tanzania: opportunities for improving livelihoods and ecosystem health

Abstract

Malignant Catarrhal Fever (MCF), transmitted from wildebeest to cattle, threatens livestock-based livelihoods and food security in many areas of Africa. Many herd owners reduce transmission risks by moving cattle away from infection hot-spots, but this imposes considerable economic burdens on their households. The advent of a partially protective vaccine for cattle opens up new options for disease prevention. In a study of pastoral households in northern Tanzania, we use stated preference choice modelling to investigate how pastoralists would likely respond to the availability of such a vaccine. We show a high probability of likely vaccine uptake by herd owners, declining at higher vaccine costs. Acceptance increases with more efficacious vaccines, in situations where vaccinated cattle are ear-tagged, and where the vaccine is delivered through private vets. Through analysis of Normalized Difference Vegetation Index (NDVI) data, we show that reported MCF incidence over 5 years is highest in areas with the greatest NDVI variability and in smaller herds. Trends towards greater rainfall variability suggest that MCF avoidance through traditional movement of cattle away from wildebeest will become more challenging and that demand for an MCF vaccine will likely increase.
Mikołaj Czajkowski, Catherine Decker, Nick Hanley, Thomas A. Morrison, Julius Keyyu, Linus Munishi, Felix Lankester, Sarah Cleaveland

WP(25)331. Energy demand management and social norms – the case study in Poland

Abstract

The study aims to investigate the impact of social norms and financial motivation on the disutility of Polish households from energy management. We analyzed consumers' preferences for new Demand-Side Management (DSM) programs. We applied a choice experiment (CE) framework for various electricity contracts that implied external control of electricity usage. Based on the hybrid model, we find that people with higher descriptive social norms about electricity consumption are less sensitive to the level of compensation and more responsive to the number of blackouts. People who stated they would sign the contract for financial reasons are less sensitive to the external control of electricity consumption. They are also less inclined towards the status quo option. Poland's energy policy focuses on energy efficiency and the reduction of greenhouse gas emissions. This study may contribute to understanding the decisions of households and provide insights into the DSM option in Poland.
Bernadeta Gołębiowska, Anna Bartczak, Mikołaj Czajkowski

WP(24)330. Demographics and the natural interest rate in the euro area

Abstract

We investigate the impact of demographics on the natural rate of interest (NRI) in the euro area, with a particular focus on the role played by economic openness, migration and pension system design. To this end, we construct a life-cycle model and calibrate it to match the life-cycle profiles from HFCS data. We show that population aging contributes significantly to the decline in the NRI, explaining about two-thirds of its secular decline between 1985 and 2030. Openness to international capital flows has not been important in driving the EA real interest rate so far, but it will become a significant factor preventing its further decline in the coming decades, when aging in Europe accelerates relative to the rest of the world. Of the two possible pension reforms, only an increase in the retirement age can reverse the downward trend in the equilibrium interest rate, while a fall in the replacement rate would make the decline even deeper. The demographic pressure on the Eurozone NRI can be alleviated by increased immigration, but only to a small extent and with a substantial lag.
Marcin Bielecki, Michał Brzoza-Brzezina, Marcin Kolasa

WP(23)329. Are Poles stuck in overeducation? Individual dynamics of educational mismatch in Poland

Abstract

The paper investigates the persistence of overeducation from an individual perspective. The following aspects of mobility are analysed: the probability of staying in employment, upward occupational mobility, and wage dynamics. Data for Poland are used. The results show that overeducated individuals are more likely to stay in employment than their properly matched colleagues. Overeducated workers, as well as undereducated ones, tend to move toward jobs for which they are more properly matched. However, the rate of this adjustment is low, and one can fairly claim that in Poland overeducation is a persistent phenomenon from an individual perspective. In line with other studies, overeducated workers are found to experience faster wage growth than properly matched individuals. However, this can be largely attributed to overeducated workers improving their match status over time. It means that initially overeducated workers can expect faster wage growth than properly matched workers, especially when they move to jobs requiring more schooling.
Jan Aleksander Baran

WP(22)328. Nvidia’s stock returns prediction using machine learning techniques for time series forecasting problem

Abstract

The main aim of this paper was to predict the daily stock returns of Nvidia Corporation, a company quoted on the Nasdaq Stock Market. The most important problems in this research are the statistical specificity of return series, i.e. the time series might turn out to be white noise, and the necessity of applying atypical machine learning methods to handle the influence of the time factor. The period of study covered 07/2012 - 12/2018. The models used in this paper were: SVR, KNN, XGBoost, LightGBM, LSTM, ARIMA and ARIMAX. The features used in the models come from classes such as: technical analysis, fundamental analysis, Google Trends entries, and markets related to Nvidia. It was shown empirically that it is possible to construct a prediction model of Nvidia daily returns that outperforms a simple naive model. The best performance was obtained by SVR based on stationary attributes. In general, models based on stationary variables performed better than models based on both stationary and non-stationary variables. The ensemble approach designed especially for time series failed to improve forecast precision. It seems that the use of machine learning models for time series problems with various classes of explanatory variables brings good results.
Marcin Chlebus, Michał Dyczko, Michał Woźniak

WP(21)327. HRP performance comparison in portfolio optimization under various codependence and distance metrics

Abstract

The problem of portfolio optimization was formulated almost 70 years ago in the works of Harry Markowitz. However, possible optimization methods are still being studied in order to obtain better asset allocation using empirical approximations of the codependence between assets. In this work, various codependence measures and distance metrics are tested in the Hierarchical Risk Parity algorithm to determine whether the results obtained are superior to those based on the standard Pearson correlation as a measure of codependence. In order to compare how HRP uses the information from alternative codependence metrics, the MV, IVP and CLA optimization algorithms were used on the same data. The dataset used for the comparison consisted of 32 ETFs representing the equity of different regions and sectors as well as bonds and commodities. The time period tested was 01.01.2007-20.12.2019. The results show that the alternative codependence metrics produce worse results in terms of Sharpe ratios and maximum drawdowns than the standard Pearson correlation for each optimization method used. The added value of this work is the use of alternative codependence and distance metrics on real data, and the inclusion of transaction costs to determine their impact on the result of each algorithm.
Marcin Chlebus, Illya Barziy
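A condensed sketch of the HRP allocation in the spirit of Lopez de Prado, parameterized by the codependence measure used to build the distance matrix; it omits the paper's MV/IVP/CLA comparisons and transaction costs, and the single-linkage choice is an assumption.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def hrp_weights(returns, codependence="pearson"):
    """Condensed HRP: cluster assets on a codependence-based distance,
    quasi-diagonalize, then split risk by inverse cluster variance."""
    corr = returns.corr(method=codependence)      # e.g. "pearson" or "spearman"
    dist = np.sqrt(0.5 * (1.0 - corr.values))     # standard correlation distance
    link = linkage(squareform(dist, checks=False), method="single")
    order = list(leaves_list(link))               # quasi-diagonal asset order
    cov = returns.cov().values

    def cluster_var(idx):
        sub = cov[np.ix_(idx, idx)]
        ivp = 1.0 / np.diag(sub)
        ivp /= ivp.sum()                          # inverse-variance weights
        return ivp @ sub @ ivp

    weights = pd.Series(1.0, index=order)
    clusters = [order]
    while clusters:                               # recursive bisection
        clusters = [c[i:j] for c in clusters if len(c) > 1
                    for i, j in ((0, len(c) // 2), (len(c) // 2, len(c)))]
        for left, right in zip(clusters[::2], clusters[1::2]):
            alpha = 1.0 - cluster_var(left) / (cluster_var(left) + cluster_var(right))
            weights[left] *= alpha                # scale left half of the split
            weights[right] *= 1.0 - alpha         # scale right half
    return weights.sort_index().set_axis(returns.columns)

rets = pd.DataFrame(np.random.normal(0, 0.01, (500, 8)),
                    columns=[f"ETF{i}" for i in range(8)])
print(hrp_weights(rets, codependence="spearman"))
```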

WP(20)326. Ex-ante and ex-post measures to mitigate hypothetical bias. Are they alternative or complementary tools to increase the reliability and validity of DCE estimates?

Abstract

Hypothetical bias remains at the heart of the controversy about the reliability and validity of value estimates from discrete choice experiments (DCEs). This especially applies to environmental valuation, where typically no market value exists for the good under study. This has motivated a large body of literature that investigates possible approaches to test for and mitigate this bias. Our study provides further evidence in this debate by testing whether the use of ex-ante or ex-post mitigation strategies is effective in reducing hypothetical bias in DCEs. Specifically, we use individual and multiple ex-ante reminders along with ex-post data analysis to test whether their individual or joint use improves the quality of willingness to pay (WTP) estimates. The analysis is carried out with the state-of-the-art mixed logit model, as well as the innovative semi-parametric logit-mixed-logit, which can capture non-standard heterogeneity distributions. The empirical study focuses on preferences for the environmental and social impacts of organic olive production. Comparing three experimental treatments to a control treatment, we test whether a cheap talk script addressing hypothetical bias, a scale reminder, or a combination of both affects stated WTP. In addition, we use ex-post data analysis aimed at correcting WTP estimates. The results show that cheap talk scripts and scale reminders, alone or in conjunction, did not significantly influence the results obtained from a sub-sample with standard budget constraint reminders. The ex-post approach outperforms the ex-ante approach and provides a significant reduction in mean WTP estimates.
Wiktor Budziński, Mikołaj Czajkowski, Sergio Colombo, Klaus Glenk

WP(19)325. Artificial Neural Networks Performance in WIG20 Index Options Pricing

Abstract

In this paper, the performance of artificial neural networks in option pricing is analyzed and compared with the results obtained from the Black-Scholes-Merton model based on historical volatility. The results are compared using various error metrics calculated separately for three moneyness classes. A market data-driven approach is taken in order to train and test the neural network on real-world data from the Warsaw Stock Exchange. The artificial neural network does not provide more accurate option prices. The Black-Scholes-Merton model turned out to be more precise and more robust to various market conditions. In addition, the bias of the forecasts obtained from the neural network differs significantly between moneyness states.
Robert Ślepaczuk, Maciej Wysocki
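For reference, the Black-Scholes-Merton benchmark with historical volatility can be sketched as below; the index level, strike, maturity and rate are placeholder values, not those from the study.

```python
import numpy as np
from scipy.stats import norm

def bsm_call(s, k, t, r, sigma):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

# Historical-volatility benchmark: annualized std of daily log returns.
index = 2000 * np.exp(np.cumsum(np.random.normal(0, 0.01, 252)))  # placeholder path
hist_vol = np.std(np.diff(np.log(index)), ddof=1) * np.sqrt(252)
# Moneyness s/k determines the class (ITM / ATM / OTM) the error is grouped by.
print(bsm_call(s=2000, k=2100, t=0.25, r=0.015, sigma=hist_vol))
```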

WP(18)324. Towards better understanding of complex machine learning models using Explainable Artificial Intelligence (XAI) - case of Credit Scoring modelling

Abstract

In recent years, many scientific journals have widely explored the topic of machine learning interpretability. It is important, as the application of Artificial Intelligence is growing rapidly and its excellent performance holds huge potential for many fields. There is also a need to overcome the barriers faced by analysts implementing intelligent systems, the biggest of which relates to the problem of explaining why a model made a certain prediction. This work covers methods for understanding a black-box model from both the global and the local perspective. Numerous model-agnostic methods aimed at interpreting black-box model behavior and the predictions generated by these complex structures are analyzed. Among them are: Permutation Feature Importance, the Partial Dependence Plot, the Individual Conditional Expectation curve, Accumulated Local Effects, techniques approximating predictions of the black-box for single observations with surrogate models (interpretable white-boxes), and the Shapley values framework. Our review leads to the question of the extent to which the presented tools enhance model transparency. All of the frameworks are examined in practice with a credit default data use case. The overview shows that each of the methods has some limitations, but overall almost all of the summarized techniques produce reliable explanations and contribute to higher transparency and accountability of decision systems.
Marcin Chlebus, Marta Kłosok
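One of the reviewed techniques, Permutation Feature Importance, is sketched below using scikit-learn on synthetic stand-in data for the credit default use case; the model choice and scoring metric are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder credit-default-like data; the paper uses a real credit dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation Feature Importance: drop in score when one feature is shuffled.
result = permutation_importance(black_box, X_te, y_te,
                                scoring="roc_auc", n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:.4f}")
```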

WP(17)323. Political connections and the super-rich in Poland

Abstract

We use newly collected original panel data on the super-wealthy individuals in Poland (observed over 2002-2018) to study the impact of the rich's political connections on their wealth level, their mobility among the rich, and the risk of dropping off the rich list. The multimillionaires are classified as politically connected if we find reliable news stories linking their wealth to political contacts or questionable licenses, if the person was formerly an informant of the communist Security Service or a member of the communist party, or when the origins of their wealth are connected to the privatization process. We find that political connections are not associated with the wealth level of Polish multimillionaires, but that they are linked to a 20-30% lower probability of upward mobility in the ranking of the rich. Moreover, being a former member of the communist party or a secret police informant increases the risk of dropping off the Polish rich list by 79%. Taken together, our results show that, contrary to some other post-socialist countries such as Russia or Ukraine, there is little evidence that the Polish economy suffers from crony capitalism.
Katarzyna Sałach, Michał Brzeziński

WP(16)322. So close and so far. Finding similar tendencies in econometrics and machine learning papers. Topic models comparison.

Abstract

The paper takes into consideration the broad idea of topic modelling and its applications. The aim of the research was to identify mutual tendencies in econometric and machine learning abstracts. Different topic models were compared in terms of their performance and interpretability, the former measured with a newly introduced approach. Summaries collected from esteemed journals were analysed with the LSA, LDA and CTM algorithms. The obtained results make it possible to find similar trends in both corpora. The probabilistic models – LDA and CTM – outperform the semantic alternative – LSA. It appears that econometrics and machine learning are fields that consider problems which are rather homogeneous at the conceptual level. However, they differ in the tools used and in their dominance in particular areas.
Marcin Chlebus, Maciej Stefan Świtała
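A minimal sketch of the LDA step on toy abstracts using scikit-learn (CTM is not available there, so only one of the three compared models is shown); the texts and the number of topics are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder abstracts; the study analyses econometrics and ML journals.
abstracts = [
    "We estimate a panel regression with fixed effects and robust errors.",
    "A deep neural network is trained with stochastic gradient descent.",
    "Cointegration tests reveal a long run equilibrium relationship.",
    "Gradient boosting outperforms random forests on tabular data.",
]

counts = CountVectorizer(stop_words="english").fit(abstracts)
dtm = counts.transform(abstracts)               # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):     # top words per topic
    top = [terms[i] for i in topic.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```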

WP(15)321. Comparison of tree-based models performance in prediction of marketing campaign results using Explainable Artificial Intelligence tools

Abstract

The research uses tree-based models to predict the success of a telemarketing campaign of a Portuguese bank. The Portuguese bank dataset has been used in past research with different models to predict the success of the campaign. We propose to use boosting algorithms, which have not been used before to predict the response to the campaign, and to use Explainable AI (XAI) methods to evaluate the models' performance. The paper examines whether 1) complex boosting algorithms perform better and 2) XAI tools are better indicators of models' performance than commonly used measures of discriminatory power such as AUC. The Portuguese bank telemarketing dataset was used with five machine learning algorithms, namely Random Forest (RF), AdaBoost, GBM, XGBoost and CatBoost, which were then compared based on their AUC and an analysis with XAI tools – Permutation Variable Importance and the Partial Dependence Profile. The two best performing models based on AUC were XGBoost and CatBoost, with XGBoost having a slightly higher AUC. These models were then examined using PDP and VI, which resulted in the discovery of XGBoost's potential overfitting and in choosing CatBoost over XGBoost. The results show that the new boosting models perform better than the older models and that XAI tools can be helpful in model comparison.
Marcin Chlebus, Zuzanna Osika

WP(14)320. Why wealth inequality differs between post-socialist countries?

Abstract

We provide the first attempt to understand how differences in households' socio-demographic and economic characteristics account for disparities in wealth inequality between five post-socialist countries of Central and Eastern Europe. We use 2013/2014 data from the second wave of the Household Finance and Consumption Survey (HFCS) and reweighted Oaxaca-Blinder-like decompositions based on recentered influence function (RIF) regressions. Our results show that differences in homeownership rates account for up to 42% of the difference in wealth inequality measured with the Gini index, and for as much as 63-109% in the case of the P50/P25 percentile ratio. Differences in homeownership rates are related to alternative designs of housing tax policies but could also be driven by other factors. We correct for the problem of the 'missing rich' in household surveys by calibrating the HFCS survey weights to top wealth shares adjusted using wealth data from national rich lists. Empirically, the correction procedure strengthens the importance of homeownership rates in accounting for cross-country wealth inequality differences, which suggests that our results are not sensitive to the significant underestimation of top wealth observations in the HFCS.
Michał Brzeziński, Katarzyna Sałach

WP(13)319. Dealing with uncertainties of green supplier selection: a fuzzy approach

Abstract

Increasing public awareness of environmental protection has led to the emergence of green supply chain management in recent years. As firms tend to outsource a significant part of their activities, the importance of supplier selection increases from a competitive standpoint. While most studies of supplier selection have introduced methods based on economic criteria, the number of studies that incorporate environmental issues is rather limited. In this paper, a methodology is proposed to address the green supplier evaluation and selection problem by first identifying the appropriate criteria and then developing a model for their measurement in the evaluation process. The authors apply fuzzy set theory to deal with the subjectivity of supplier selection decision-making and to capture the linguistic terms used for human assessments. A rule-based fuzzy inference system is developed to evaluate suppliers based on ten environmental criteria and eventually select the best-performing supplier. The dynamic nature of the model allows the decision-makers to manipulate the importance of different supplier attributes and of the constructed rules, based on individual preferences. An illustrative example is also presented to show the applicability and effectiveness of the proposed methodology.
Hayk Manucharyan
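A toy rule-based fuzzy inference step in the Mamdani style, with triangular membership functions and two illustrative rules on a 0-10 criteria scale; the paper's system uses ten environmental criteria and a richer rule base, so everything below is a simplified assumption.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def evaluate_supplier(emissions_score, recycling_score):
    """Two illustrative rules on a 0-10 criteria scale:
    IF emissions is good AND recycling is good THEN supplier is good;
    IF emissions is poor OR recycling is poor THEN supplier is poor."""
    good_e = triangular(emissions_score, 5, 10, 15)  # 'good' fuzzy set
    poor_e = triangular(emissions_score, -5, 0, 5)   # 'poor' fuzzy set
    good_r = triangular(recycling_score, 5, 10, 15)
    poor_r = triangular(recycling_score, -5, 0, 5)
    fire_good = min(good_e, good_r)      # fuzzy AND -> min
    fire_poor = max(poor_e, poor_r)      # fuzzy OR  -> max
    # Weighted-average defuzzification over consequents (poor = 0, good = 10).
    total = fire_good + fire_poor
    return 10 * fire_good / total if total else 5.0

print(evaluate_supplier(8, 6))  # higher output = better-performing supplier
```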

WP(12)318. How do managers actually choose suppliers? Evidence from revealed preference data

Abstract

Supplier selection plays a pivotal role in the success of any organization, as it significantly reduces purchasing costs and increases corporate competitiveness. At the same time, it is a very challenging task, as decision-makers have to trade off among different supplier attributes. In this paper, a discrete choice model of supplier selection is developed, based on revealed preference data collected from an electrical equipment manufacturer in Poland. We explore the importance of different attributes for the initial choice and the subsequent switching of suppliers. The proposed logit model is preceded by a nonparametric analysis conducted through the Chi-square Automatic Interaction Detector (CHAID) framework, which serves exploratory purposes. We find that delivery and reliability play a crucial role in decision-making with regard to choosing suppliers and switching them if necessary.
Hayk Manucharyan

WP(11)317. Supplier selection in emerging market economies: a discrete choice analysis

Abstract

This study presents the perceived importance of different supplier attributes for managers' choice of suppliers in emerging market economies. We analyze the supplier selection process based on multiple attributes categorized into six groups: quality, cost, delivery, product, service, and business. Empirical data for this study was collected from 163 corporate executives in the automotive and fast-moving consumer goods industries operating in Poland and India.
A two-part survey was conducted; the first part consisted of a Likert-scale set of questions aimed at determining the perceived importance of supplier attributes. The second part of the survey was a discrete choice experiment that examined the actual choices of experimental supplier profiles made by executives. Comparing our results to previous works in this domain, we find that the importance of the cost attribute has decreased over the past two decades, whereas the relevance of delivery and product has increased. Each of the six supplier attributes was broken down into sub-attributes, which provided insight into the decision-making process. The results indicate that, with respect to delivery, the delivery lead time, responsiveness to demand fluctuations, and compliance with the due date had a significant effect on executives' decisions. At the same time, new product availability and product range played a crucial role among the product attributes. Finally, the dataset was split into different sub-groups, based on the two industries and two countries analyzed, to examine industrial and cultural differences.
Hayk Manucharyan

WP(10)316. Investing in VIX futures based on rolling GARCH models forecasts

Abstract

The aim of this work is to compare the performance of VIX futures trading strategies built on different GARCH-type volatility forecasting techniques. Long and short signals for VIX futures are produced by comparing one-day-ahead volatility forecasts with the current historical volatility. We find that, using daily data over the seven-year period 2013-2019, strategies based on the fGARCH-TGARCH and GJR-GARCH specifications outperformed those based on the GARCH and EGARCH models, and performed slightly below the "buy-and-hold" S&P 500 strategy. For the base GARCH(1,1) model, the training window size and type gave stable results, whereas performance varied significantly across the refit frequency, the conditional distribution of returns, and the historical volatility estimators. Despite the non-robustness of some investment strategies and some room for improvement, the presented strategies show their potential in competing with the equity and volatility benchmarks.
Paweł Sakowski, Robert Ślepaczuk, Oleh Bilyk
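The signal-generation loop could be sketched with the arch package as below (a GJR-GARCH variant would add the asymmetry term via o=1); the window length, daily-refit scheme and simulated returns are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Placeholder daily returns in percent; the paper uses 2013-2019 VIX futures data.
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0.0, 1.5, 1750))

signals = []
for t in range(1500, len(returns)):
    window = returns.iloc[t - 1500:t]           # rolling estimation window
    res = arch_model(window, vol="GARCH", p=1, q=1).fit(disp="off")
    forecast_vol = np.sqrt(res.forecast(horizon=1).variance.iloc[-1, 0])
    hist_vol = window.tail(21).std()            # simple historical benchmark
    # Long VIX futures when forecast volatility exceeds current historical
    # volatility, short otherwise.
    signals.append(1 if forecast_vol > hist_vol else -1)
```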

WP(9)315. Size does matter. A study on the required window size for optimal quality market risk models

Abstract

When it comes to market risk models, should we use all the data we possess or rather find a sufficient subsample? We conducted a study of different fixed moving window lengths (from 300 to 2000 observations) for three Value-at-Risk models: historical simulation, GARCH and CAViaR, for three different indexes: WIG20, S&P 500 and FTSE 100. The testing samples contained 250 observations each, ending at the end of each of the years 2015-2019. We also addressed the subjectivity of choosing the window size by testing change point detection algorithms (binary segmentation and PELT) to find the best matching cut-off point. The results indicate that a training sample larger than 900-1000 observations does not increase the quality of the model, while lengths below this cut-off provide unsatisfactory results and decrease the model's conservatism. Change point detection methods provide more accurate models: applying the algorithms at every recalculation of the model improves the results by one exceedance on average. Our recommendation is to use the GARCH or CAViaR model with a recalculated window size.
Marcin Chlebus, Mateusz Buczyński
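The PELT step can be sketched with the ruptures package; the penalty value and the simulated volatility break are illustrative assumptions.

```python
import numpy as np
import ruptures as rpt

# Placeholder daily returns with a volatility regime change in the middle.
rng = np.random.default_rng(0)
returns = np.concatenate([rng.normal(0, 0.01, 1200), rng.normal(0, 0.02, 800)])

# PELT change point detection on the return series; the last detected break
# determines the start of the training window for the VaR model.
algo = rpt.Pelt(model="rbf").fit(returns.reshape(-1, 1))
breakpoints = algo.predict(pen=10)     # end indices of homogeneous segments
train_start = breakpoints[-2] if len(breakpoints) > 1 else 0
train_window = returns[train_start:]   # data since the most recent break
```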

WP(8)314. Novel multilayer stacking framework with weighted ensemble approach for multiclass credit scoring problem application

Abstract

Stacked ensemble approaches have recently been gaining importance in complex predictive problems where extraordinary performance is desirable. In this paper, we develop a multilayer stacking framework and apply it to a large credit scoring dataset with multiple, imbalanced classes. Diverse base estimators (among others, bagged and boosted tree algorithms, regularized logistic regression, neural networks and the Naive Bayes classifier) are examined, and we propose three meta learners to be finally combined into a novel, weighted ensemble. To prevent bias in meta feature construction, we introduce a nested cross-validation scheme into the architecture, while a weighted log loss evaluation metric is used to overcome training bias towards the majority class. Additional emphasis is placed on proper data preprocessing and on Bayesian optimization for hyperparameter tuning to ensure that the solution does not overfit. Our study indicates better stacking results compared to all individual base classifiers, yet we stress the importance of assessing whether the improvement compensates for the increased computational time and design complexity. Furthermore, the conducted analysis shows extremely good performance of bagged and boosted trees, in both the base and the meta learning phase. We conclude that a weighted meta ensemble with regularization properties reveals the least tendency to overfit.
Marcin Chlebus, Marek Stelmach
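A compressed sketch of the stacking idea with scikit-learn: out-of-fold meta features via the internal cv argument, wrapped in an outer cross-validation for a nested scheme. The estimator mix and hyperparameters are placeholders, and the paper's weighted ensemble of three meta learners is not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Placeholder multiclass, credit-scoring-like data.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

# Diverse base learners feed a regularized logistic meta learner; the
# internal cv=5 builds out-of-fold meta features to prevent leakage.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("nb", GaussianNB()),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(C=0.5, max_iter=1000),
    cv=5, stack_method="predict_proba",
)
# Outer CV wrapped around the internal stacking CV yields a nested scheme.
scores = cross_val_score(stack, X, y, cv=3, scoring="neg_log_loss")
print(scores.mean())
```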

WP(7)313. Price Transmission across Commodity Markets: Physical to Futures

Abstract

Primary commodity prices are generally determined in dual markets: a physical (spot) market dominated by supplier-producers and a forward (futures) market where consumers, producers and speculators interact. While the futures market operates on an almost continuous basis, the spot market only opens in predetermined short periods of time over which the state of supply and demand is revealed. This poses a challenge for the question of price dynamics: which market leads/follows and where does price discovery occur? We perform an empirical analysis using spot and futures coffee prices and find that most price information originates from the futures markets. Shocks to the spot price are quickly integrated into market prices, with the effect of the shock quickly dying out or a new equilibrium being attained. A shock to the futures price almost always results in a permanent change in prices, leading to a new equilibrium.
Gilbert Mbara
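A sketch of how lead-lag and price discovery can be examined with a vector error correction model in statsmodels; the simulated cointegrated pair and the lag order are illustrative assumptions (the paper's exact methodology may differ).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Placeholder cointegrated spot and futures log prices sharing a common trend.
rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(0, 0.01, 1000))
data = pd.DataFrame({"spot": trend + rng.normal(0, 0.005, 1000),
                     "futures": trend + rng.normal(0, 0.002, 1000)})

# VECM with one cointegrating relation; the adjustment coefficients (alpha)
# indicate which market corrects toward equilibrium, i.e. which one follows.
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(res.alpha)  # a small |alpha| for futures would suggest futures lead
```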

WP(6)312. Payment and policy consequentiality in dichotomous choice contingent valuation: Experimental design effects on self-reported perceptions

Abstract

Although the contingent valuation literature emphasises the importance of controlling for respondents' consequentiality perceptions, this literature has rarely accounted for the difference between payment and policy consequentiality. We examine the influence of the randomly assigned tax amount on consequentiality self-reports and their potential endogeneity using data from a single dichotomous choice survey about reducing marine plastic pollution in Norway. Results show that consequentiality perceptions are a function of the tax amount, with payment consequentiality decreasing and policy consequentiality increasing with higher tax amounts. We discuss the challenge of finding valid instruments to address potential endogeneity of consequentiality perceptions.
Ewa Zawojska, Tobias Börger, Tenaw G. Abate, Margrethe Aanesen

WP(5)311. Preferences for demand side management—a review of choice experiment studies

Abstract

This review of choice experiment (CE) studies deals with the valuation of electricity supply attributes in the residential sector. We consider the willingness to pay for and the willingness to accept changes in the electricity supply. The results could be used to determine consumers' preferences for demand-side management (DSM) programs and could serve as a reference for formulating policies. DSM is an option for constructing a low-carbon electricity system, improving energy efficiency, and achieving the sustainable development of an economy. The results from CEs justify investment in new solutions. The research shows that consumers are open to DSM, but they prefer simple programs to complex ones. Decision-makers could introduce DSM programs that allow for power outages and provide compensation for households. The societal advantages of DSM are not obvious to consumers, so the implementation of DSM requires communication and more research on people's preferences.
Bernadeta Gołębiowska

WP(4)310. "I go, I pay". The role of experience in recognizing the need for public financing of cultural goods

Abstract

Public financing of culture is a common phenomenon - especially in European countries. Empirical studies reveal that it is socially acceptable and even desirable. However, a question arises: what factors influence support for such a cultural policy? The study shows that the most important determinant is related to experience - both past and anticipated future experience. People who often and intensively consume various cultural goods are also more willing to subsidize them through the public sector. The results of the study not only show that regular contact with culture has a positive impact on understanding the important role of the state in shaping the cultural sector, but also that the attitude towards cultural policy changes rapidly after crossing a certain threshold of experience.
Aleksandra Wiśniewska, Bartosz Jusypenko

WP(3)309. What do lab experiments tell us about the real world? The case of lotteries with extreme payoffs

Abstract

In this study, we conduct a laboratory experiment in which the subjects make choices between real-world lottery tickets typically purchased by lottery customers. In this way, we are able to reliably offer extremely high potential payoffs, something rarely possible in economic experiments. In a between-subject design, we separately manipulate a number of features that distinguish the situation faced by the customers in the field and by subjects in typical laboratory experiments. We also have the unique opportunity to compare our data to actual sales data provided by the operator of the lottery. Overall, we find the distributions to be highly similar (meaning high external validity of the laboratory experiment). The only manipulation that makes a major difference is that when the probabilities of winning specific amounts are explicitly provided (which is not the case in the field), choices shift towards options with lower payoff variance. We also find that standard laboratory measures of risk posture fail to explain our subjects' behavior in the main task.
Raman Kachurka, Michał Krawczyk, Joanna Rachubik

WP(2)308. Increasing the cost-effectiveness of water quality improvements through pollution abatement target-setting at different spatial scales

Abstract

In this paper, we investigate the potential gains in cost-effectiveness from changing the spatial scale at which nutrient reduction targets are set for the Baltic Sea, focusing on nutrient loadings associated with agriculture. The costs of achieving loading reductions are compared across five levels of spatial scale, namely the entire Baltic Sea; the marine basin level; the country level; the watershed level; and the grid square level. A novel, highly disaggregated model, which represents decreases in agricultural profits, changes in root zone N concentrations, and transport to the Baltic Sea, is proposed and then used to estimate the gains in cost-effectiveness from changing the spatial scale of nutrient reduction targets. The model includes 14 Baltic Sea marine basins, 14 countries, 117 watersheds and 19,023 10-by-10 km grid squares. A range of policy options are identified which approach the cost-effective reductions in N loadings identified by the constrained optimization model. We argue that our results have important implications for both domestic and international policy design for achieving water quality improvements where non-point pollution is a key stressor of water quality.
Mikołaj Czajkowski, Wiktor Budziński, Jan Hagemejer, Maciej Wilamowski, Tomasz Żylicz, Hans E. Andersen, Gite Blicher-Mathiasen, Katarina Elofsson, Berit Hasler, Christoph Humborg, James C. R. Smart, Erik Smedberg, Per Stålnacke, Hans Thodsen, Adam Wąs, Nick Hanley

WP(1)307. Valuing externalities of outdoor advertising in an urban setting – the case of Warsaw

Abstract

Outdoor advertising produces externalities, such as visual pollution, that have to be considered in cityscape planning. In recent years, opposition to excessive outdoor advertising in Poland has grown, resulting in the enactment of a new regulation in 2015: the Landscape Bill. It allows local authorities to limit outdoor advertising in their municipality. We present the results of a stated preference study aimed at estimating the value that people attach to reductions of outdoor advertising in Warsaw, the capital of Poland. We considered two types of outdoor advertising media: free-standing ads and on-building ads, alongside five levels of advertising reduction. We find that the inhabitants of Warsaw prefer regulating and limiting the amount of outdoor advertising, and we quantify their willingness to pay for such a policy. The most preferred level for free-standing ads was a 75% reduction, for which the people of Warsaw are willing to pay 5.6 million EUR annually in the form of increased prices and rents to compensate owners' losses. For on-building ads, a total ban was the most preferred option, valued at 11.3 million EUR per year. Socio-demographic drivers of people's willingness to pay are explored. Overall, our study demonstrates how stated preference methods can be used to inform urban landscape policies, and it adds to the ongoing debate surrounding outdoor advertising.
Mikołaj Czajkowski, Wiktor Budziński, Michał Bylicki, Mateusz Buczyński
