Errorinos, or, where did the squiggle end up?

Working on stuff for a UC Berkeley project involving deep learning with TensorFlow Keras.

These prediction series are anchored at the starting end by the linear baseline: each window opens onto a straight-line continuation from the last timestamp of the input series. Toward the far end they converge to a forecast price trend. The three graphs come from slightly different network topologies built around Long Short-Term Memory (LSTM) cells. The middle one, a bidirectional LSTM, performs decently and trains in half the time the feedback model requires. Although example recurrent neural networks are often shown with two stacked layers, this implementation uses a single layer between two LSTM cells. The input cell has the same shape as the input data, which is 8-dimensional (2 price points and 6 periodic waveforms). Sandwiched between the input cell and a sigmoid output activation is a larger 256-unit LSTM cell that can pass data back to the previous cell. In this configuration the cells are not bound to first-in-first-out, round-robin feedback rules, so the network can also learn by feeding itself windows in reverse, which is counter-intuitive for a time series.
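For the curious, here is a minimal Keras sketch of what that middle (bidirectional) topology could look like. Only the 8-dimensional input, the 256-unit bidirectional LSTM, and the sigmoid output come from the description above; the window length, optimizer, and loss are my assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers

    WINDOW = 64    # timesteps per training window -- assumed, not given above
    FEATURES = 8   # 2 price points + 6 periodic waveforms

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, FEATURES)),
        # One 256-unit LSTM layer wrapped in Bidirectional, so each
        # window is read both forward and in reverse.
        layers.Bidirectional(layers.LSTM(256)),
        # Sigmoid output; assumes the target price is scaled into (0, 1).
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse")  # optimizer/loss are assumptions

Note that `Bidirectional` concatenates the forward and reverse passes by default, doubling the LSTM's output width, which is one place the reversed reading shows up in practice.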

This is the error quantifier I’m working on now. The units are errorinos. What I know about them so far is that BTC has a lot of them, and Tether has almost none.

    import numpy as np

    # 'a' is a DataFrame of actual and predicted prices per timestep
    a['Market Data'] = (a['Actual'] + a['Actual mid']) / 2    # blend the two actual series
    a['diff'] = np.square(a['Predicted'] - a['Market Data'])  # squared error per timestep
    what = np.mean(a['diff']) / np.mean(a['Predicted'])       # MSE scaled by mean predicted price
    print(what)                                               # the score, in errorinos
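To see why the units land where they do, here is a back-of-envelope with made-up numbers (they are not from the actual runs):

    btc_mse = 300.0 ** 2      # predictions off by ~$300 on a ~$30,000 coin
    print(btc_mse / 30_000)   # -> 3.0 errorinos
    usdt_mse = 0.01 ** 2      # off by a cent on the $1.00 peg
    print(usdt_mse / 1.00)    # -> 0.0001 errorinos

Squared dollars divided by dollars still carries the price scale, so big-ticket BTC racks up errorinos while a pegged stablecoin barely registers.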

Here is the application flowchart:

[application flowchart image]

Further reading:

- A good recent paper on LSTM
- TensorFlow code for time series

Isn’t the whole point that you want to predict “unknowns” here? If you take the language-model analogy, you want to predict unknown words, but that will never happen? :slight_smile:

It is like throwing darts at a target… you can hit 50% (a coin flip) pretty easily, but an algorithm gets you over that by a handful of percent. It’s like giving the target magnetic properties, using knowledge of the past.