Blueprints as a programming language
At Epic Games they have been considering introducing an intermediate language between Blueprints and C++.
Here's an idea they could entertain: How about designing a text-based intermediate language that behaves like Blueprints?
Blueprints are easier to use because they help in reasoning about concurrency and state. They do this by associating state with occurrences of expressions. The same idea can be implemented in text-based programming, though for obvious reasons it is unfamiliar to most programmers.
You could do this directly. If you were to write an actor that is destroyed after a delay when the player presses the 'K' key, the program would take the following form:
on key(K) {
    delay(2.0)
    destroy(self)
}
Here's the corresponding Blueprint:
If you were to interpret this program the way you interpret a Blueprint, the 'delay' would introduce its own state and would not pass a signal through while it is still delaying an existing one. Expressions like delay(2.0) and explode would carry pockets of state that are instantiated and destroyed along with the actor.
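To make that concrete, here is a minimal sketch in Haskell rather than in the proposed language; the helper name newDelayGate is invented for illustration. Each occurrence of delay owns its own small pocket of state, and a signal arriving while that occurrence is still busy is simply dropped, which is roughly how a Blueprint Delay node behaves.

import Control.Concurrent (forkIO, threadDelay)
import Data.IORef (newIORef, readIORef, writeIORef)

-- Each call to newDelayGate creates one occurrence-local pocket of state.
-- A signal arriving while the gate is still delaying is simply dropped.
newDelayGate :: Double -> IO (IO () -> IO ())
newDelayGate seconds = do
  busy <- newIORef False
  pure $ \continue -> do
    alreadyBusy <- readIORef busy
    if alreadyBusy
      then pure ()                      -- still delaying: ignore this signal
      else do
        writeIORef busy True
        _ <- forkIO $ do
          threadDelay (round (seconds * 1000000))
          writeIORef busy False
          continue
        pure ()

main :: IO ()
main = do
  gate <- newDelayGate 2.0
  gate (putStrLn "destroy(self)")       -- runs after two seconds
  gate (putStrLn "never printed")       -- arrives while delaying, dropped
  threadDelay 3000000                   -- keep the process alive to observe it

The point is not the concurrency machinery but that the state lives with the expression occurrence, not in the actor's class.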
There would be a lot of work in getting the details right, so I propose doing this patiently rather than rushing it through. I think the groundwork for this kind of language has already sprung from the study of linear logic, and a lot of existing programming language theory can be reused as well.
Discoverability
There are usability aspects of Blueprints that should not be left aside. Much of this comes "for free" in a graphical language and makes it more discoverable.
Parsing would have to be incremental. There are several incremental parsing algorithms, and I think a handful of them are already reliable enough to be used.
The semantics of the language would be equally unfamiliar to everyone, so it would be a good idea to have the editor highlight the parts of the program that are running at each moment.
Blueprints also have the benefit of being incompatible with autocompletion. Intellisense, or autocompletion, is an exceedingly stupid and jarring feature that doesn't have a rival in this world. When you're typing, you've already decided what you're going to write. Intellisense jumps in to cover half of the screen, slowing down and interrupting your typing while helpfully introducing a few typos that would not have happened without it.
To help people work with the scripts, provide a tray that contains the things that can be added at any given moment, a bit like the tray that appears when you right-click in the Blueprint editor. Keep it away from the writing area! Then add a feature where, if the user hovers the mouse over the text, the tray displays a manual page for that operation and lets them look up similar operations.
The language should come with full type inference that can run in the editor. This should be used together with the help tray to let people construct their program while filtering out the pieces that do not match, just as they have been able to do in the Blueprints editor.
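This is not hypothetical machinery: GHC's typed holes already do a crude version of it in plain Haskell. Leave an underscore where an expression should go and the compiler reports the type that hole must have; recent versions also list in-scope "valid hole fits". A help tray could be driven by exactly that kind of information.

{-# OPTIONS_GHC -fdefer-typed-holes #-}
-- Compiling this module makes GHC report the type the hole must have,
-- roughly [Char] -> Int, and it can suggest in-scope candidates such as
-- 'length' as valid hole fits.
lengthOfGreeting :: Int
lengthOfGreeting = _ "hello"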
Helpful ideas with type inference
Type inference as it has been implemented in Haskell or SML would be ideal, were it not for its incompatibility with tradition. Fairly minor changes can do wonders, though.
Constructs such as polymorphic types and typeclasses, together with the treatment of type inference as a constraint-solving puzzle, are the foundations you need to make type inference comfortable.
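For anyone unfamiliar with that style, here is a standard little Haskell example of inference as constraint solving; nothing in it is specific to this proposal. The compiler gathers a Num constraint from the use of (+) and solves for the most general type, so one unannotated definition works at many types.

-- GHC infers: pairwiseSums :: Num a => [a] -> [a] -> [a]
-- The 'Num a' constraint comes purely from the use of (+).
pairwiseSums xs ys = zipWith (+) xs ys

usedAtInts :: [Int]
usedAtInts = pairwiseSums [1, 2, 3] [10, 20, 30]

usedAtDoubles :: [Double]
usedAtDoubles = pairwiseSums [0.5, 1.5] [2.0, 3.0]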
This requires that operations such as multiplication take in the same kind of structure as they return. Rules of this kind have usually caused trouble with matrix multiplication. The usual vector arithmetic libraries are troublesome to implement anyway and contain a lot of repetitive work. To avoid this I would suggest doing something like APL does: implement a tensor notation that makes it easy to write matrix multiplication.
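Here is a sketch of the tension, and of the index-based way around it. The Mul class and the function-as-matrix representation below are only illustrations, not a proposed library.

-- The inference-friendly rule: an operation consumes the same kind of
-- structure it returns.  Square matrices of one fixed size fit this shape:
class Mul a where
  mul :: a -> a -> a

-- General matrix multiplication does not: it is (m x n) -> (n x p) -> (m x p),
-- three different shapes.  An index-based, APL-style notation sidesteps this
-- by defining the product over indices.  A toy version, with a matrix as a
-- function from indices to entries and n the shared inner dimension:
type Matrix = Int -> Int -> Double

matMul :: Int -> Matrix -> Matrix -> Matrix
matMul n a b = \i j -> sum [a i k * b k j | k <- [0 .. n - 1]]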
Quaternions usually behave quite neatly, but you might like to consider how geometric algebra fares in this context.
The inability to implement implicit coercions is a common deal-breaker for picking up type inference. I would propose avoiding implicit type coercions, as they are messy to begin with. Note, though, that I think Haskell also gets this wrong, in the sense that it is too eager to decide on a numeric type for a structure.
Turn the most common numeric types, such as integers, rationals and irrationals, into type classes and treat them as if they were constraints. Separate the representation of numbers from the machine representation of numbers. If you do this right, you also allow symbolic computer algebra within the same framework.
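Haskell's overloaded literals already behave roughly like this, apart from the eager defaulting complained about above: the literal 2 has the constrained type Num a => a, and choosing a machine representation is a separate step. One payoff, sketched below with a toy Expr type, is that a symbolic representation can satisfy the same constraint, which is the hook for computer algebra.

-- A numeric literal is constraint-like: 2 :: Num a => a.
-- A toy symbolic type can satisfy the same constraint:
data Expr
  = Lit Integer
  | Var String
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

instance Num Expr where
  fromInteger = Lit
  (+)         = Add
  (*)         = Mul
  negate e    = Mul (Lit (-1)) e
  abs         = error "abs: not needed for this sketch"
  signum      = error "signum: not needed for this sketch"

-- Ordinary numeric syntax now builds syntax trees:
example :: Expr
example = 2 * Var "x" + 1
-- Add (Mul (Lit 2) (Var "x")) (Lit 1)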
Discoverability in many languages is hampered badly by the implicit global namespace that attribute access introduces. You can avoid this by treating attributes as local names and providing them through modules. I suppose this also allows profunctor optics to be implemented in the same framework.
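A hedged reading of this in Haskell: the attribute position below is an ordinary value exported from a module, not a member of a global attribute space, and because it is written as a van Laarhoven lens the same name serves for both reading and updating. The Actor record and the helper names are invented for illustration; the full profunctor-optics formulation generalizes the same idea.

{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

data Actor = Actor { actorPosition :: (Double, Double) } deriving Show

-- The "attribute" is just an exported value with an inferred, checked type.
position :: Lens Actor (Double, Double)
position f (Actor p) = Actor <$> f p

view :: Lens s a -> s -> a
view l s = getConst (l Const s)

set :: Lens s a -> a -> s -> s
set l a s = runIdentity (l (const (Identity a)) s)

demo :: ((Double, Double), Actor)
demo = (view position actor, set position (3, 4) actor)
  where actor = Actor (0, 0)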
That's enough for now
Tim Sweeney points out on Reddit that...
We see the introduction of another programming layer as a decision that’s binding on everything we do for a decade or more so we are cautious about quick patchy decisions.
This would be especially true if they actually picked up on the points in this post. Some of the papers I've read recently about linear types, proof theory and the typing of mutability and interaction are just a few years old. Most of the papers were written in the 1980-2010 range.
Also, my proposal could turn out to be rather mediocre in the end. I've recently been interested in logic programming augmented with interaction. The simplest applications seem to be turn-based games described with logical formulas, and the synthesis of AI for those games.