# Dependent and linear typing on representing hardware memory

Let's imagine we already had a dependently and linearly typed language, and that I'd like to model computer memory and arrays with it.

There would be something like bytes. They'd behave like modular numbers, with values ranging from 0 to 255.

```
0   :: byte
1   :: byte
...
255 :: byte
```

Bytes can be freely copied around without constraints, as they are values. Next we introduce reads and writes:

```
read  :: byte ⊸ byte
write :: byte ⊸ byte ⊸ 1
```
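These two primitives can be simulated concretely. Below is a minimal Python sketch of the idea (the `MEM` dictionary and the helper names are my own, not part of the hypothetical language): a 256-byte address space where both offsets and values wrap around mod 256.

```python
# A tiny simulation of the byte-addressed memory.
# MEM, read and write are hypothetical names for illustration.
MEM = {}  # offset -> stored byte

def read(offset):
    """Return the byte stored at offset (0 if never written)."""
    return MEM.get(offset % 256, 0)

def write(offset, value):
    """Store value at offset; bytes behave as modular numbers."""
    MEM[offset % 256] = value % 256

write(3, 300)        # 300 wraps around to 44
assert read(3) == 44
```

The linear arrows `⊸` have no counterpart here; the sketch only captures the `(offset, value)` behaviour, not the typing discipline.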

Reads and writes follow the `(offset, value)` pattern. Now we've introduced memory reads and writes into a small address space. Next we'd like to describe what an array is. We could just start in plain code:

```
array_read  :: byte ⊸ [byte]
array_write :: byte ⊸ [byte] ⊸ 1
array_size  :: byte ⊸ byte

array_read offset = reads 1 (read offset) where
    reads _ 0 = []
    reads i n = read (offset+i) : reads (i+1) (n-1)
array_write offset values =
    write offset (length values)
  ; writes (offset+1) values where
        writes offset []  = 0
        writes offset (head:tail) =
            write offset head
          ; writes (offset+1) tail
array_size offset = (read offset) + 1
```

It borrows a bit of syntax from Haskell to represent everything. We also assume things such as `(;) :: 1 ⊸ 1 ⊸ 1` and `0 :: 1`, plus addition and subtraction on bytes. The semicolon is a parallel composition operator.
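The length-prefixed layout these definitions describe can be mirrored in an ordinary language. A Python sketch (all names hypothetical): the length byte sits at `offset`, and the elements follow right after it.

```python
# Simulation of the length-prefixed array layout on a byte memory.
# MEM, read, write and the array_* helpers are hypothetical names.
MEM = {}

def read(offset):
    return MEM.get(offset % 256, 0)

def write(offset, value):
    MEM[offset % 256] = value % 256

def array_write(offset, values):
    """Store the length byte first, then the elements after it."""
    write(offset, len(values))
    for i, v in enumerate(values):
        write(offset + 1 + i, v)

def array_read(offset):
    """Read the length byte, then that many elements in order."""
    return [read(offset + 1 + i) for i in range(read(offset))]

def array_size(offset):
    """Bytes occupied: the length byte plus the elements."""
    return read(offset) + 1

array_write(10, [1, 2, 3])
assert array_read(10) == [1, 2, 3]
assert array_size(10) == 4
```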

A few problems glare out from this simple example:

1. If we were modeling low-level programs this way, what would we do with lists?
2. Technically these are not arrays, but dynamic arrays with the length written into them.
3. All definitions are hopelessly recursive; we define the thing three times yet haven't proven the definitions are consistent.
4. We have no way to determine whether parallel access is causing a race condition.

The last point is particularly interesting. What would be a sufficient proof that the definitions above are consistent?

If we write some value and then read it back, it should be the same value.

```
∏n:byte. (array_read n . array_write n) ≡ id
```
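This roundtrip law can at least be probed on a concrete simulation of the memory. A Python sketch (all names hypothetical, a dictionary standing in for the 256-byte address space) that checks `array_read n . array_write n ≡ id` for a few offsets:

```python
# Probing the write-then-read roundtrip law on simulated memory.
# All names below are hypothetical, for illustration only.
MEM = {}

def read(offset):
    return MEM.get(offset % 256, 0)

def write(offset, value):
    MEM[offset % 256] = value % 256

def array_write(offset, values):
    write(offset, len(values))
    for i, v in enumerate(values):
        write(offset + 1 + i, v)

def array_read(offset):
    return [read(offset + 1 + i) for i in range(read(offset))]

# array_read n . array_write n ≡ id, probed for a few values of n.
for n in (0, 17, 200):
    for values in ([], [5], [1, 2, 3]):
        array_write(n, values)
        assert array_read(n) == values
```

Of course a handful of probes is not a proof; in the dependently typed setting the ∏-quantified statement itself would be the theorem to prove.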

If we write some value, then write anything into the byte just past it, and then read the value back, we should still obtain the same value.

```
∏n,trash:byte. ( array_read n
               . array_write n
               ; write (n+array_size n) trash ) ≡ id
```
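This non-interference property can likewise be probed on a concrete simulation. A Python sketch (all names hypothetical): a trash write at `n + array_size n`, the byte just outside the array, should leave the stored values intact.

```python
# Probing that a trash write just past the array leaves it intact.
# MEM and the helpers are hypothetical names for illustration.
MEM = {}

def read(offset):
    return MEM.get(offset % 256, 0)

def write(offset, value):
    MEM[offset % 256] = value % 256

def array_write(offset, values):
    write(offset, len(values))
    for i, v in enumerate(values):
        write(offset + 1 + i, v)

def array_read(offset):
    return [read(offset + 1 + i) for i in range(read(offset))]

def array_size(offset):
    return read(offset) + 1

n = 40
array_write(n, [1, 2, 3])
for trash in (0, 99, 255):
    write(n + array_size(n), trash)  # byte just outside the array
    assert array_read(n) == [1, 2, 3]
```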

That's how we would represent it, but proving it does not appear easy unless we know something more about the main memory itself. For example, we don't know whether `(write n . read n) ≡ id` holds.

To model fixed-size arrays we'd have to do something like this:

```
array_read  :: ∏len:byte. byte ⊸ Σlst:[byte]. (length lst ≡ len)
array_write :: ∏len:byte. byte ⊸ ∏lst:[byte]. (length lst ≡ len) ⊸ 1
array_size  :: ∏len:byte. byte
```

Dependent types describe this kind of relation, where a type is chosen by a value in the program. Technically it means building parts of the program only after we know what value is passed into it.
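In an existing dependently typed language this idea is first-class. A Lean 4 sketch (all names hypothetical) of a length-indexed vector, where the length is part of the type itself, so `length lst ≡ len` needs no separate proof object:

```lean
-- Vec: a length-indexed vector; the length lives in the type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- Bytes as the numbers 0..255.
abbrev Byte := Fin 256

-- Hypothetical signatures mirroring the fixed-size array API above:
-- the result of a read is forced to contain exactly `len` elements.
axiom array_read  : (len : Byte) → Byte → Vec Byte len.val
axiom array_write : (len : Byte) → Byte → Vec Byte len.val → Unit
axiom array_size  : (len : Byte) → Byte
```

Linearity is missing from this sketch; Lean's arrows are unrestricted, so only the dependent half of the story is captured.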