Thursday, July 26, 2012

Problems with D

D has a lot of nice features. But it also has...
  • Unintuitive syntax:
    • advanced import syntax is rather baffling to me
    • contents of "is(...)" looks like an expression, but it's not
    • 4-5 meanings for "static"
    • unnecessarily confusing tuple syntax (e.g. implicit tuple expansion)
    • inconsistency between template declaration and call site ("foo(X, Y)(X x, Y y)" vs "foo!(X, Y)(x, y)")
    • even the humble foreach loop--why foreach (x; y) instead of foreach (x in y), given that "in" is a reserved word anyway?
  • Emphasis on C heritage: "break" in switch cases, function definitions are not visually distinctive or greppable, no pattern-matching or destructuring, no statements-as-expressions (e.g. str = switch(x) { case 1: "one"; case 2: "two"; }), overuse of 'static' and parentheses...
  • Too many different rules (e.g. symbol lookup works differently for modules than for static symbols in classes and mixins. Lookup rules are also different for UFCS/member functions, although in this case there is a good reason for the difference.)
  • Types and identifiers seem to share a single namespace:
    class Door { }
    class Fridge {
      Door Door; // I do this often in C# and even C++, but it's illegal in D.
    }
    
    The severity of this issue is reduced by the fact that field and function names in D tend to use lowerCamelCase while type names more often use PascalCase, unlike C#, where both types and member names use PascalCase. (The C# version of this snippet, which does compile, appears after this list.)
  • Syntax feels ambiguous in some places, as if it wasn't fully thought out. TODO: find a good example.
  • Users cannot make contracts work in production code (Release builds)
  • String mixins will be hostile to IDE analysis and refactoring tools; we need to think of a better approach to metaprogramming.
  • Using CTFE (compile-time function evaluation) in combination with static if (including "if" for overload resolution) will also be difficult for IDEs due to their unbounded CPU usage, but this feature is so powerful it's probably worth the trouble.
  • Pet peeve: "ref" and "out" are not allowed at call sites. I don't mind if D does not require "ref" and "out" at call sites, but it should at least allow them. For clarity, in C++ I already use "OUT" at call sites as well as in function declarations (it's pre-#defined on Windows in WinDef.h), but D doesn't have a preprocessor, so it can't support the same trick. (A short C# illustration of call-site ref/out appears after this list.)
  • foreach can behave in several different ways. The way the compiler selects between these behaviors is unintuitive.
  • D seems not to have enough rules to prevent escape of stack pointers into heap objects or parent stack frames:
    int* ptrEscape()
    {
      int x = 7;
      return oops(&x);
    }
    int* oops(int* ptr) { return ptr; }
    
    // For extra fun, try this:
    int doubleOops() { int* x = ptrEscape(); int y = *x + 1; return *x + y; }
    enum DoubleOops = doubleOops();
    // Result in DMD 2.059 Windows:
    // Assertion failure: '0' on line 2018 in file 'interpret.c'
    // Plus a message box appears: "abnormal program termination"
    
  • D has nice metaprogramming facilities, but they are seemingly not as good as I was hoping; for example, there is no custom attribute system, and I don't think there is a way to send a D AST to a template, so one cannot write compile-time macros that restructure D code. Also, D compiler-as-a-service is not yet implemented.
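For comparison (re: the "single namespace" point above), the C# equivalent of the Fridge snippet compiles without complaint, since a C# member is allowed to share a name with its type:

  class Door { }
  class Fridge
  {
      Door Door;   // legal in C#: a member may have the same name as its type
  }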
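And here is what the "ref"/"out" point refers to: C# demands the keyword at the call site as well as in the declaration, so a reader can tell at a glance which arguments may be modified. A minimal C# illustration:

  class RefOutDemo
  {
      static void Swap(ref int a, ref int b) { int tmp = a; a = b; b = tmp; }

      static void Demo(string s)
      {
          int x = 1, y = 2;
          Swap(ref x, ref y);            // "ref" is mandatory at the call site
          int n;
          if (int.TryParse(s, out n))    // "out" must also appear at the call site
              x = n;
      }
  }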
Frankly, I wish D would copy more ideas straight from C#, because C# is mostly quite well designed, although it doesn't have as many features as D (notably absent: slices, mixins, compile-time templates, scope(exit)).

Tuesday, July 10, 2012

Async/await vs stack-switching

(In response to)

Before we can talk about stack switching being "better" or "worse" we have to have some idea of how the two approaches correspond to each other, and a sense of whether they are equivalent or if one is more "powerful". The four cases you've identified are (1) Fire-and-forget, (2) Await immediately, (3) WhenAll, and (4) the ArchiveDocuments function.

To be honest this is a bit difficult for me because I've never used async/await OR stack switching. But I'll give it a try.

Let us assume that the stack-switching system has some "cooperative scheduler" that is in charge of switching between stacks and deciding who runs next. Each thread would have its own stack scheduler. Maybe it shouldn't be called a scheduler at all because it is not preemptive. I will call it a scheduler anyway. Presumably we can invent mechanisms by which different threads can communicate as necessary, but for simplicity I'm going to imagine only a single thread plus a mechanism for receiving asynchronous events such as "file read complete", "received UDP packet" or "timeout expired".

So here are the equivalents in a stack-switching system:
  1. Fire-and-forget: You just want a task to run and don't care when it completes. With a stack-switching mechanism, the task would be created on a new stack and run (synchronously) until it hit an async operation. Then the scheduler mechanism (based e.g. on a Windows Message Queue) saves the stack, switches back to the calling stack, and later, when the async operation completes, enqueues a message that will invoke the new stack to handle the event. This process repeats until the task is done.

  2. Await immediately. In this case the code using await (let's call it B) must be marked async, and in a stack-switching world this means that B must be able to run in its own stack so that it can be paused, and it could be marked with an attribute to make this happen, or (in a language that doesn't support stack-switching directly) could be called with the help of some special wrapper function that creates the stack. B got the task from some other method (let's call it C) that ALSO has its own stack, and so it initially appears the stack-switching approach would suffer from a proliferation of stacks. However, since you are awaiting immediately there's really no need for two new stacks, so some way is needed when calling C to say "don't create a new stack, just use mine". That should be easy, it just means we call C directly. Then B, not C, gets its own stack. Now, "await C()" is implemented by yielding the current stack (asking the scheduler to switch to some other stack - I am not sure about the fine details yet) and typically the scheduler would return to A, the caller of B. At this point we must shift our focus to A to figure out what happens next.

  3. WhenAll. I doubt WhenAll is itself implemented with async/await, since you can await only one thing at a time and WhenAll wants to wait for several things. So WhenAll doesn't create a stack, but it subscribes to a "stack destroyed" event on each stack, and creates some kind of Task object that manages all this. In each "stack destroyed" event, the WhenAll task checks whether everything is done and if so, asks the scheduler to switch to B (who used "await WhenAll"). The "await WhenAll" command itself corresponds to signing up to be notified of the completion of the Task, and yielding the stack. I expect the user would initiate this with a single command, so the code should still be easy to follow. Some of what I said in (2) applies to (3) because you are "awaiting immediately" on WhenAll. (A rough sketch of this bookkeeping, with invented names, appears after this list.)

  4. This beast (where have I seen this before?) is a good example of the power of async+await:
    async void ArchiveDocuments(List<Url> urls)
    {
      Task archive = null;
      for(int i = 0; i < urls.Count; ++i)
      {
        var document = await FetchAsync(urls[i]);
        if (archive != null)
          await archive;
        archive = ArchiveAsync(document);
      }
    }
    So let's figure out the stack-based equivalent. Since ArchiveDocuments is async, it typically runs on its own stack. This method makes a direct (non-async) call to FetchAsync because it wants to block until FetchAsync is complete; however, it creates a new stack to call ArchiveAsync. It invokes ArchiveAsync immediately, but the scheduler returns to ArchiveDocuments as soon as ArchiveAsync pauses for the first time (e.g. when it writes something to disk asynchronously).

    Finally, "await archive" is implemented by asking the scheduler to block this stack until the archive task is done executing. The scheduler returns to the caller of ArchiveDocuments or to whatever it decides should run next. (A sketch of the whole method rewritten in this style, again with invented names, also appears after this list.)
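Here is a sketch of the WhenAll bookkeeping described in point 3. To be clear, CoopStack, Scheduler, and WhenAllTask are invented stand-ins for the imagined stack-switching runtime, not real .NET types; the stub methods do nothing, since the point is only to show the shape of the bookkeeping.

  using System;
  using System.Collections.Generic;

  class CoopStack { public event Action Destroyed; }      // fired when the stack's task finishes
  static class Scheduler
  {
      public static CoopStack ActiveStack;                 // the stack currently running
      public static void Resume(CoopStack s) { }           // switch to s (not implemented here)
      public static void Yield() { }                       // save ActiveStack, run something else
  }

  class WhenAllTask
  {
      int _remaining;
      CoopStack _waiter;                                    // the stack that called "await WhenAll"

      public WhenAllTask(IEnumerable<CoopStack> stacks)
      {
          foreach (var s in stacks)
          {
              _remaining++;
              s.Destroyed += () =>                          // note: no new stack is created here
              {
                  if (--_remaining == 0 && _waiter != null)
                      Scheduler.Resume(_waiter);            // everything is done: wake up B
              };
          }
      }

      public void Await()                                   // the "await WhenAll" step
      {
          _waiter = Scheduler.ActiveStack;
          if (_remaining > 0)
              Scheduler.Yield();                            // give up this stack for now
      }
  }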
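And here is how ArchiveDocuments itself might read in that style. Again, StackTask is an invented stand-in (Run would start the work on a fresh stack and return to the caller at its first pause; Await would yield the current stack until that work finishes), and Fetch/Archive stand in for the fetching and archiving operations, with Fetch pausing this stack internally while it waits for the network.

  using System;
  using System.Collections.Generic;

  class StackTask
  {
      public static StackTask Run(Action work) { work(); return new StackTask(); } // stand-in only
      public void Await() { }                                                      // stand-in only
  }

  class Archiver
  {
      void ArchiveDocuments(List<string> urls)
      {
          StackTask archive = null;
          for (int i = 0; i < urls.Count; ++i)
          {
              var document = Fetch(urls[i]);      // direct call: this stack blocks until the fetch is done
              if (archive != null)
                  archive.Await();                // yield this stack until the previous archive finishes
              archive = StackTask.Run(() => Archive(document));  // new stack; control returns at its first pause
          }
      }

      string Fetch(string url)   { return "document from " + url; }  // stand-in
      void   Archive(string doc) { }                                 // stand-in
  }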
Okay, that's it. As far as I can see, the stack-based approach is as powerful as async/await, the way it operates is just different. Calling an "async" method without "await" generally requires the creation of a new stack, but if you want to immediately await the result, you can just make a direct call.

Now, we can start to think about which is better. In fact, neither is better in every case; each has benefits and drawbacks. First I'll mention some relevant performance facts:
  1. The async/await system is optimized for cases where the task is not actually asynchronous. If the task completes immediately, no extra memory is allocated to track the state of the task. I am not sure whether stack-switching could be optimized for that case. I suppose if the system has a few "scratch stacks" lying around, it would be cheap to switch to one and release it when done, without creating new heap objects for bookkeeping. However, the parameters to the "async" function would have to be computed on the original stack (because their values may depend on data in the original stack) and then copied to the new one. So when you add up the bookkeeping, it appears this case will be slower than async/await. Alternatively, another approach would be to NOT create a new stack by default. The potentially-async method runs on the same stack as its caller, but if it needs to pause, the scheduler is invoked to move the method's stack frame to a new stack and then returns to the caller on the old stack. Thus it is optimized for the non-async case, with a small penalty if the method needs to pause.
  2. I've made some notes on the stack switcher before, but it bears repeating: new stacks on x86 require a minimum 8KB of virtual memory (but realistically more) and 4KB of physical memory. Most likely the stacks would be allocated on a special heap and would be subject to garbage collection or ref-counting. To save memory, the stack manager could copy small stacks that have not been used recently into heap blocks, and then copy them back to "scratch stacks" when they are eventually invoked. User code would have only handles to stacks, not pointers, so that the stack manager can freely move stacks around. When it is necessary to move larger stacks around, the stack manager would use system calls to rearrange virtual memory instead of physical memory (this is possible because stacks are aligned to page boundaries).
  3. The async/await system is implemented entirely within the compiler, so async methods become classes, and local variables become members of the class. Perhaps a kind of switch statement is used to jump to the middle of the method when it is resumed, or maybe the function is completely rearranged somehow. (A hand-written approximation of this transformation follows this list.)
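To make point 3 concrete, here is a rough hand-written approximation of that transformation, using a stand-in ReadAsync operation. The real compiler output uses builder and awaiter types and is considerably more involved, but the idea of "locals become fields, a switch resumes in the middle" is the same:

  using System.Threading.Tasks;

  // Imagined original method:
  //   async Task<int> SumTwoReadsAsync() {
  //       int a = await ReadAsync();
  //       int b = await ReadAsync();
  //       return a + b;
  //   }
  class SumTwoReads
  {
      int _state;                                   // which await we are paused at
      int _a;                                       // the former local variable 'a'
      TaskCompletionSource<int> _result = new TaskCompletionSource<int>();

      public Task<int> Run() { MoveNext(0); return _result.Task; }

      void MoveNext(int justCompleted)
      {
          switch (_state)                           // jump back into the middle of the "method"
          {
              case 0:
                  _state = 1;
                  ReadAsync().ContinueWith(t => MoveNext(t.Result));   // first await
                  return;
              case 1:
                  _a = justCompleted;                                  // resume after the first await
                  _state = 2;
                  ReadAsync().ContinueWith(t => MoveNext(t.Result));   // second await
                  return;
              case 2:
                  _result.SetResult(_a + justCompleted);               // "return a + b"
                  return;
          }
      }

      static Task<int> ReadAsync() { return Task.FromResult(42); }     // stand-in async operation
  }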
At last, we can discuss pros and cons of stack-saving vs. async/await:
  1. Pro: IMO, stack saving is easier to understand. Everybody understands what a stack is and how it works; with stack switching you just have to know that there are a bunch of stacks and a scheduler automatically switches between them for you. It's a pretty straightforward extension of multithreading, except that stacks are much cheaper than threads, and you don't have to worry about synchronization if you set up your tasks to run on the same thread.
  2. Pro/Tie: methods that use async/await run slower because their local variables become class members, access to which cannot be optimized as well by the JIT. (Edit: then again, while the code in the stack may run faster, overhead for allocating and switching between stacks may be higher.)
  3. Pro: async/await must be implemented independently for every compiler, but stack-saving does not require any compiler support. It would automatically be available in any language that targets the CLR. Syntactic sugar might be helpful for some operations, like calling a method on a new stack, but it would not be required.
  4. Pro: suppose method A calls task B on a new stack. B calls a function C that doesn't know it is involved in an async task, and then C fires an event D, and D yields the stack (suspending B, C, and D). This scenario is not possible with async/await: C cannot be paused if it is not an async task. To put it another way, async/await is "infectious": if you want to run a stack of functions asynchronously, all of them must be marked async.
  5. Con: Instead of compiler support, stack-saving requires support within the CLR itself. But I think async/await was developed prior to the recent deployment of a new CLR, so MS had the chance to add that support if they had wanted to.
  6. Con/Tie: creating a new stack might be slower than creating a Task object that holds the method's state. But perhaps it could be optimized to be as efficient in the average case.
  7. Con/Tie: A naive stack manager would use a lot of memory, e.g. 8KB RAM and 16KB VM per stack. But perhaps memory use would be similar in the two systems if they are both optimized properly.
  8. Tie: it's not clear whether resuming a stack is faster or slower than resuming a .NET async Task.
  9. Pro: Stack switching can be brought to other non-.NET environments and languages. If it were implemented in native code, for example, C code could use it (actually, Win32 has "Fibers", but I wouldn't expect high efficiency from them). Any skilled developer with very low-level knowledge could implement it (although such people are few, so it rarely happens). In contrast, the async/await feature must be implemented independently for every language.
  10. Con: the implementation of stack-switching is different on each operating system and would need a bunch of scary assembly code.
  11. Con: stack-saving works great if you only need a few stacks. But if you need a large number of stacks that are all called frequently, they will use a lot of memory. An example would be a video game with thousands of individual "actors" that each get their own stack or async Task, where each actor is called 100 times per second. This would require more memory in a stack-switching system and would probably be harder on the cache, too.
  12. Con: we live in the real world. MS just ain't gonna support stack switching, and it seems overkill to have these two mechanisms that do the same thing. However, stack switching should still be considered outside .NET land.
Well, on the whole it looks like the pros and cons just about cancel each other out. And #12 is a doozie. Personally, I prefer stack saving due to points 1 thru 4: (1) I find stack switching to be a more intuitive concept, (2) optimized stack saving might edge out async/await in performance, (3) there are numerous compilers for .NET, some 3rd-party, and it would sure be nice if they could all get async support "automagically", and (4) maybe stack-saver is more powerful, after all.

P.S. I just learned that the Go language uses a stack-switching system to implement goroutines.

Drawing diagrams quickly

I have always wanted to make a note-taking program myself, but I don't have the time. Unfortunately, no one else appears to have had the same UI idea as me, so let me share it with you.

What I really, really want to have is an "all common tasks" tool, something that combines the four most common shapes + text + common editing tasks in a single tool. This tool would be sticky (you have to switch out of it manually):
  1. Draw a box with optional text: click and drag a box. Then if you want text inside, simply start typing and it will appear centered in the box with word-wrapping.
  2. Draw an ellipse with optional text: click and drag an oval. If you want text inside, start typing.
  3. Draw a line with optional text: click and drag something line-shaped. Then if you want text along the line, simply start typing and it will appear above or below the line (I don’t care which), centered. If the line starts at the edge of another line or box, it is snapped to it.
  4. Draw a curved line: click and drag any other shape. Ideally the shape will be straightened out slightly so it doesn’t look like it was drawn by a mouse or a toddler.
  5. Draw a dot: click anywhere on the canvas.
  6. Draw free-floating text: click anywhere on the canvas and start typing. The dot is replaced by text, unless the user begins with *, in which case the dot becomes the first bullet of a bulleted list. (Bonus points if, somehow, the user can easily select a point at which to word-wrap.)
  7. Hold Alt to pass mouse input to the "arrow" tool (select, move, resize existing shapes)
  8. Hold Ctrl to restrict input to straight lines at 0°/30°/45°/90°. Instead of drawing curved lines, straight lines with corners come out. Instead of ellipses, rectangles with rounded corners come out.
This tool becomes even more useful if there are some keyboard shortcuts to:
  1. Toggle arrows (===, ==> or <==>) on the shape that was just drawn (or a shape that is selected).
  2. Jump to user-defined visual styles (that group together text size, line thickness, dotted/dashed/solid, background colors, maybe even the type of shape), e.g. by Ctrl+1, Ctrl+2, etc.
  3. Edit the text some time after it was written. Actually, this really needs some kind of mouse interface.