Kalman filter and currencies strength

  1. #1

    Kalman filter and currencies strength

    1 Attachment(s) Lately I'm playing around with Recursive Bayesian estimators (http://en.wikipedia.org/wiki/Recursi...ian_estimation), for which the Kalman filter is a special case. I won't go into the gory details; see Bayesian Forecasting and Dynamic Models by West and Harrison.

    Initially I was trying to gauge the trend and the volatility of this market... But that's not what this post is all about. It is about why you would be better off long on the Dragon today.

    The market return isn't Normally distributed but somewhere between Cauchy and Student distributed. Kalman filter performance degrades quickly for non-Normal innovations. That's why I use a second Kalman filter that estimates the error of the first one, and I feed this information back into the first model as a modulation of the allowed variance of its state. A sort of dynamic lag <-> smoothness cursor.

    This first model is a second-order polynomial local estimation. I picked a 2nd-order polynomial since it can approximate (Taylor) any smooth function, be it a sine wave for a market with a cyclic component (range or volatile phase) or an exponential trend (such as indices and stocks). The acceleration term helps catch up with the price in case of a fast movement. The second filter just uses a constant model. I use H4 to evaluate the daily trend.
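    The 2nd-order polynomial local model described above can be sketched as a constant-acceleration Kalman filter. This is a minimal reconstruction, not the poster's actual code: the state layout [price, velocity, acceleration] follows the post, but the noise levels q and r and the toy prices are illustrative assumptions.

    ```python
    import numpy as np

    def make_poly2_model(dt=1.0, q=1e-5, r=1e-2):
        # State = [price, velocity, acceleration]; all values here are assumptions.
        F = np.array([[1.0, dt, 0.5 * dt**2],   # price    += v*dt + 0.5*a*dt^2
                      [0.0, 1.0, dt],           # velocity += a*dt
                      [0.0, 0.0, 1.0]])         # acceleration is locally constant
        H = np.array([[1.0, 0.0, 0.0]])         # we only observe the price
        Q = q * np.eye(3)                       # process noise (assumed)
        R = np.array([[r]])                     # observation noise (assumed)
        return F, H, Q, R

    def kalman_step(x, P, z, F, H, Q, R):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = z - H @ x                           # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    F, H, Q, R = make_poly2_model()
    x = np.array([1.10, 0.0, 0.0])              # initial EUR/USD-like price, at rest
    P = np.eye(3)
    for z in [1.101, 1.103, 1.106, 1.110]:      # toy rising prices
        x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
    print(x)  # estimated [price, velocity, acceleration]
    ```

    With rising toy prices, the estimated velocity turns positive: the acceleration component is what lets the filter catch up when the price starts moving.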

    Here is a screenshot of EUR/USD H4. The blue line is the perfect, but non-causal, low-pass sinc filter (http://en.wikipedia.org/wiki/Sinc_filter) used with 41 samples. It gives you an idea of the lag. The filter is green when the trend is probably up and red otherwise. The dashed envelope is the 95% confidence interval of the price estimate. Below are the error between the price and the estimate (black) and this value filtered by the 2nd filter (red). It doesn't follow the error too closely, so as not to make the main filter over-react.

  2. #2
    Member rheny
    The market return isn't Normally distributed but somewhere between Cauchy and Student distributed. Kalman filter performance degrades quickly for non-Normal innovations. That's why I use a second Kalman filter that estimates the error of the first one, and I feed this information back into the first model as a modulation of the allowed variance of its state. A sort of dynamic lag <-> smoothness cursor. This first model is a second-order polynomial local estimation. I chose a 2nd-order polynomial because it can approximate (Taylor) any smooth function...
    Are you talking about ARMA processes which generate the price movements? I tried before to fit a time series model to daily data on GBPUSD; the partial autocorrelation showed a significant coefficient only for Y(t-1) (partial autocorrelation of Y and Y(t-1)), so the price movement can basically be called a non-stationary AR(1) process with error e, or in other words a random walk with drift.

  3. #3
    Junior Member NIEBG
    I have an estimate of the most likely trend. Keep in mind that it is a likelihood, not a certainty! I can use this information to decide whether EUR/USD is probably up or down. OK, I hear you, I could also just look at the chart and see if it's rising or falling :-). You're right. However, the idea I had is to replicate this for each of the majors, not only to know whether they are up or down but by how much they are up or down. If GBP/USD is moving down it means USD is stronger than GBP. If EUR/USD is down too, then EUR is weaker than USD too....
    For this I use the rate of variation. By dividing a price difference by the price you get a unit-less value. This way I eliminate the unit/scale and I can compare EUR/NZD and USD/JPY directly.
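    The normalization described above is a one-liner; here is a minimal sketch with made-up prices, just to show how it makes pairs with very different scales directly comparable:

    ```python
    # Dividing a price change by the price gives a unit-less return,
    # so EUR/NZD and USD/JPY moves can be compared directly.
    def rate_of_variation(prev, curr):
        return (curr - prev) / prev

    eurnzd = rate_of_variation(1.8000, 1.8090)   # ~1.8 scale
    usdjpy = rate_of_variation(150.00, 150.75)   # ~150 scale
    print(eurnzd, usdjpy)  # both +0.5%: same move despite different scales
    ```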

    Nice thought. That alone ought to improve any currency strength indicators, you'd think. Correlation indicators as well. Maybe even more so, no?

  4. #4
    Nice thread! Have you tried toying around with other Machine Learning algorithms? I've noticed the widespread use of xgboost and gbm in the machine learning community. Edit: I've also read elsewhere about using the residuals from ARIMA as a feature too. I haven't attempted it though, so I'm not sure about the potency of that.
    I coded a few ML algos myself, mostly to learn and understand what they do and how they do it. My finding is that ML is to be used when you know what you're looking for, not the other way round. I mean that data mining won't work for market data, whose sample size is ludicrously small for things like neural networks. Not even talking about deep nets. It can only overfit fluke relationships.

    I don't use packaged ML software since it is usually not open and pluggable. You cannot find out what is inside, and even less modify its inner state. If I hadn't coded the Kalman filter itself I couldn't have added my little trick to take coloured noise into account without modelling this noise in the input matrices.

  5. #5
    Are you talking about ARMA processes which generate the price movements?
    I am talking about ARMA processes which explain the price movements. The model follows the trend but does not generate it.

    I tried before to fit a time series model to daily data on GBPUSD; the partial autocorrelation showed a significant coefficient only for Y(t-1) (partial autocorrelation of Y and Y(t-1)), so the price movement could basically be called a non-stationary AR(1) process with error e, or in other words a random walk with drift.
    Raw price series are indeed very autocorrelated, while their returns are (almost?) not. Any time series can be modelled as a RW with drift. Estimating this drift from market data is the hard part.
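    The contrast above is easy to demonstrate on synthetic data (this is not GBPUSD, just a simulated random walk with drift): the lag-1 autocorrelation of the levels sits near 1, while that of the differences sits near 0.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    drift, n = 0.0001, 5000
    # Random walk with drift: cumulative sum of drift + white noise.
    price = np.cumsum(drift + 0.001 * rng.standard_normal(n))

    def lag1_autocorr(x):
        x = x - x.mean()
        return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

    returns = np.diff(price)
    print(lag1_autocorr(price))    # close to 1: the level is non-stationary
    print(lag1_autocorr(returns))  # close to 0: differences look like white noise
    ```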

  6. #6
    Senior Member sarapano
    quote I coded a few ML algos myself, mainly to learn and understand what they do and how they do it. My finding is that ML is to be used when you know what you're looking for, not the other way round. I mean that data mining won't work for market data, whose sample size is ludicrously small for things like neural networks. Not even speaking about deep nets. It can only overfit fluke relationships. I don't use ML software since it is normally not open and pluggable. You cannot find out what is inside, and even less change its inner...
    Fair point about the inherent 'black box' nature of ML algorithms.
    So you're saying that you mainly do manual adjustment of parameters using techniques such as the Kalman filter? How's it coming along so far?

    I have been toying around with ML algorithms and had some small but statistically insignificant results. That seems to be a massive problem, as all my results are always statistically insignificant!

  7. #7
    So you're saying that you mainly do manual adjustment of parameters using techniques such as the Kalman filter? How's it coming along so far?
    No. I mean that KF is meant for iid white noise with known noise matrices. What if you need to estimate these matrices from the data while filtering? Blackbox software typically doesn't allow that easily.
    Market data is corrupted with colored noise. Because of the autocorrelation of the noise, a trend filter oscillates because it follows the swings. The usual way around this is to model the noise in the matrices (GPS data processing). All the modelling I found is based on ARMA. But ARMA can't fit the residuals well... What do you do? With blackbox software you can't easily insert a correction between the predict and the update step that uses the current state. That is what I do. It stabilizes the filter (a little).
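    A correction inserted between predict and update could look like the following 1-D sketch. This is my own rough reconstruction, not the poster's actual code: a second, constant-model filter (here a simple exponential smoother) tracks the innovation, and its output inflates the first filter's variance when the error stays persistently one-sided, a symptom of colored noise. The constants q, r, alpha and boost are all made-up.

    ```python
    import numpy as np

    def adaptive_filter(zs, q=1e-4, r=1e-2, alpha=0.1, boost=50.0):
        x, p = zs[0], 1.0          # 1-D local-level filter on the price
        err_s = 0.0                # second filter: smoothed innovation
        xs = []
        for z in zs:
            # Predict
            p = p + q
            innov = z - x
            # Correction step: feed the smoothed error back as extra variance,
            # so a persistent one-sided error raises the gain and cuts the lag.
            err_s = (1 - alpha) * err_s + alpha * innov
            p = p + boost * err_s**2
            # Update
            k = p / (p + r)
            x = x + k * innov
            p = (1 - k) * p
            xs.append(x)
        return np.array(xs)

    zs = np.linspace(1.10, 1.12, 50)   # toy trending series
    est = adaptive_filter(zs)
    print(est[-1])                     # tracks the trend with reduced lag
    ```

    On a steady trend the innovation is persistently positive, so the feedback term keeps the gain high and the estimate close to the price; on flat noise the smoothed error stays near zero and the filter remains smooth.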

  8. #8
    Senior Member sarapano
    quote No. I mean that KF is intended for iid white noise with known noise matrices. What if you need to estimate these matrices from the data while filtering? The blackbox software typically doesn't allow that easily. Market data is corrupted with colored noise. Because of the autocorrelation of the noise, a trend filter oscillates because it follows the swings. The usual way around this is to model the noise in the matrices (GPS data processing). All the modelling I found is based on ARMA. But ARMA can't fit the residuals well... What do you do? With...
    Apologies, I'm sort of lost in all that jargon.

    What I do is create lags of the prices and feed those as features into the machine learning algorithm, along with some other features that I feel may be significant.
    Similar to a previous point you made, these algorithms are like black boxes and it'd be near impossible to understand the relationships among the variables...
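    The lag construction described above can be sketched as follows (synthetic returns; the column layout, with the last k returns as features and the next return as target, is an assumption):

    ```python
    import numpy as np

    def make_lag_matrix(returns, k=3):
        # Each row holds the k most recent returns; the target is the next one.
        rows = [returns[i - k:i] for i in range(k, len(returns))]
        X = np.array(rows)                 # features: last k returns
        y = returns[k:]                    # target: next return
        return X, y

    r = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 0.2])
    X, y = make_lag_matrix(r, k=3)
    print(X.shape, y.shape)  # (3, 3) and (3,)
    ```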

    Well, guess its back to the drawing board!

  9. #9
    That is what people do. You throw in a lot of features that you expect may contain some useful info and you hope the NN will discover it. You add some loss function, frequently a few-steps-ahead prediction, and off we go. After a few minutes/hours the algo spits out a model. However, given that the dataset is very small, highly correlated and very noisy, what you get is a neural net that learnt its input by heart (if deep enough), or one that found a few ad hoc rules only working for this dataset, or a model that says the price is 1.0000 +/- a lot of noise. It depends on the amount of regularization you've put in. The more features you include, the worse it gets due to the curse of dimensionality.

  10. #10
    Junior Member Alvaro207
    quote No. I mean that KF is meant for iid noise with known noise matrices. What if you need to estimate these matrices from the data while filtering?
    What about sequential Monte Carlo smoothing techniques (Rao-Blackwellized, etc.)? Do you know how they compare to Kalman filters for noise filtering?
