Friday, June 7, 2013

In terms of staying agnostic from Oracle Suite 11g, BPMN 2.0 is already a standard. If we want to keep our business processes from depending on the BPM Workspace, though, we should be using our own human task construct. There's a standard around this too (presumably WS-HumanTask is the one to target), and we should be using it to map onto our next BPM vendor. There are also WS-ORG and WS-Identity. It looks like jBPM uses WS-ORG natively, which would give us a nice migration path later on.

One trap: a failure to wire components together, especially a deprecated business rule, will be erroneously reported as coming from composite.xml. It is in fact inside the SOA part of your BPM Navigator, inside your process.composite. A full text search across your project should find it.

The BPM API silently fails to produce an authenticationService if you are not an admin user; it's just null. Scratch that, it looks like it fails to produce an authenticationService full stop. It's looking increasingly like we should be going with the web services, which are a little laborious, but at least they're decipherable. This proprietary, undocumented API layer with no community around it is a little bit terrifying.

Monday, June 3, 2013

Oracle BPM 11.1.1.7 ps6 custom integration

3 years since my last post. Moving on, now I'm doing some Oracle stuff. Oracle BPM (Business Process Management) is an incredibly expensive, somewhat competent, moderately usable system which sits on its SOA stack, in its contrived Fusion Middleware layer. We're using it to build executable workflows for the complex processes which govern Universities. But we can't use the default view layers for our students, for a whole bunch of reasons. So, since the students mostly need to see progress, we're going with our own custom view layer. BPM is service driven, so this shouldn't present a giant problem.

There are two main elements going forward (not counting all the fancy dan javascript and whatever):

1) A Model which encompasses all the service calls into the Oracle provided client API, and
2) A bunch of forms which will generate payloads to add into the tasks so that the engine can process them and proceed.

Model:
package code.model {
  import scala.collection.JavaConversions._
  import oracle.bpel.services.workflow.client._
  import oracle.bpel.services.workflow.query._
  import java.util.HashMap
  import IWorkflowServiceClientConstants._

  class Worklist(host: String, username: String, password: String, protocol: String = "SOAP") {
    val client = protocol match {
      case "SOAP" =>
        val properties = new HashMap[CONNECTION_PROPERTY, java.lang.String]()
        properties.put(CONNECTION_PROPERTY.SOAP_END_POINT_ROOT, "https://%s".format(host))
        WorkflowServiceClientFactory.getWorkflowServiceClient(WorkflowServiceClientFactory.SOAP_CLIENT, properties, null)
      case other => sys.error("Unsupported protocol: " + other) //The match was silently non-exhaustive before
    }
    def tasks = {
      val queryService = client.getTaskQueryService
      val workflowContext = queryService.authenticate(username, password.toCharArray, null)
      val queryColumns = List("TASKID", "TASKNUMBER", "TITLE", "OUTCOME")
      val optionalInfo = List(ITaskQueryService.OptionalInfo.PAYLOAD)
      queryService.queryTasks(workflowContext,
        queryColumns,
        optionalInfo,
        ITaskQueryService.AssignmentFilter.MY_AND_GROUP,
        null, //No keywords
        null, //No custom predicate
        null, //No special ordering
        0,    //Do not page the query result
        0)
    }
  }
}
I'm including full import trails because Scala is really touchy about versions and I'd like people to be able to follow along if they need to. Like, you know, me in two years. The second thing is going to be working out the payloads. I know they're going to be XSDs sitting somewhere in or around my metadata repository. I just haven't worked out where yet.
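For the record, calling it from the view layer should look something like this (the host and credentials here are made-up placeholders, and tasks just hands back whatever the query service returns):

```scala
//Hypothetical usage of the Worklist above; server name and credentials are invented.
val worklist = new Worklist("soa.example.edu:8001", "student1", "notMyRealPassword")
val myTasks = worklist.tasks //task list including payloads, ready for rendering progress
```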

Saturday, November 6, 2010

Sproutcore finally functioning

So the first thing I did was check for Javascript dependency management. I've been down this road a little way before, and I know that the last thing I want to be doing is writing everything from scratch. There are probably 500 frameworks out there that want to be the last word in cool RIA development and will be patronisingly offended that all I want to do is write a pissy little engine in them. That's fine, they'll tolerate me until they get an option on a 3rd user and then I'll be out in the cold again.

I need something that's going to minify, lint check, modularize and autoinclude. I want to be able to split up my code into as many files as I need to for the logic to make sense to me, without having to go back to some big kludgy main.html includer and append into the head.

I want a framework that does that for me. If it can also bring in jQuery, Raphael, Flot, that sort of shit automagically and guarantee compatibility that would be nice. Obviously I need to be pretty cross browser, right up to the point of not having HTML5 canvas to work with. Even then there are probably hackarounds. I'm honestly not looking to push it too hard. I think the bulk of my logic will be scene graph handling, which is quite similar to model stuff.

Key handling is important. No, fuck it. I'll just have a point and click interface. Not like there's a keypad on an iThingy.

So yeah, I went and fished around. JavaScriptMVC, Cappuccino, Sproutcore all sort of look the business. I don't care how many Desktop style widgets they ship with because I won't be using them. I do care about whether they give me a hot compiling server off the command line so that I can one click build, deploy and run. I don't want to fuck around with Windows 7 trying to serve its own files locally, I want to write a little green guy walking around the screen.

My fucking god SproutCore took forever to get running.

First I made the terrible mistake of using the newest versions of everything. That 'everything', by the way is:
sproutcore
thin (these two are both gems, thin is a webserver. And how the FUCK did I end up in Rubyland? Anyway...)
devkit (some thing we apparently need to build on windows - has MinGW in it or something)
ruby

So after about two hours of trying it with ruby 1.9.2 because someone in a Sproutcore field note said it was possible on 1.9.1rc something (it wasn't), I went back to 1.9.1rc. And then 1.8.7. Each time, reinstalling all the gems and whatnot. When did I figure out that it's 5000 times faster if you don't install the fucking rdoc? At the end. Of course, at the end.

Then there was a pleasant time of me hitting localhost:4020 when it really needed to be localhost:4020/myApp, and of course it didn't say a word. Neither in server output nor in client land.

Let's hope it gets nicer. I'm optimistic. At the very least it seems to me that it's going to make a bunch of deployment stuff work more nicely than my fucked up bundles of js.

A new project! Mine, this time.

In 5 years I want to be floating in a pool of pina coladas, digging large holes in the garden for the pleasure of later filling them in and generally being incredibly obnoxious to people who have to work for a living while I explain that I don't use VB or C# because they're too mainstream and I would only ever code for the love of it now.

So here comes brand new project # 431...

Constraints:

It should give me something to think about which isn't work.
It should reinvent my old game To Catch a Thief, and put it into a context. What context? A context where I could A: Get some visibility on my funny game and B: Eventually make some money out of it. (See above, re Pina Coladas). What's a pina colada anyway? I think it involves orange juice.

I think there are a few places where money flows on the web:

1) App store
2) Xbox Live
3) iTunes
4) Blog advertising

And I'm not sure about the last one. Certainly if you take the amount of money my family spends on online services they go on 3,2,1,4 in that order. And I'm not aware that I've ever clicked on a blog ad. Mostly because people who advertise on their blog always seem to me to be spammy. I'll eat those words when it seems like a good idea to me to do so. Yum!

So the problem with App Store is I don't have a mac. I also don't know Objective-C or whatever it's called that you use to develop iOS stuff.

What I want to do here is the least possible work to make my game, which should be the simplest possible game, playable by the most people.

Basically my elevator pitch:

I want to make an episodic, Sierra style game that people would play for 20 minutes at a time, on a train. The controls are simple. The timeline of the game is player instigated, which means that you can just drop it into your pocket and you won't die. You just won't do anything. It would almost be a text based adventure except that I really like my animation that I had for ShadowCat, the main character. (He's a D&D character of mine).

So why do I play this game?

I think the game narration is funny.
I like watching the main character walk around.
I'm interested in the plot.

In that order.

So here's what we don't need:

Ragdoll.
High level animation.
High detail graphics.
High fidelity timelining.
Twitch reflexes.
Multitouch.
Accelerometer.

I'm going to do it in a browser. And when I sell it on the appstore I think I'll put it in a UIWebView and STILL put it in a browser.

Why not Flash? Can't be used with the App Store. And that's where I think anybody pays money.

Now the weird thing about this is that it's going to be the opposite of my normal practices. I'm going to use a language that I'm already reasonably familiar with, I'm going to do as little engineering as possible. I am, in short, going to try very hard not to build a framework.

Friday, October 29, 2010

How to databind the index of the element in the backing collection

Here's a quick tip. It's a hack in MeTL, you'll have to fiddle to make it fit your circumstance.

Imagine you've got an ItemsControl. You've got an ItemTemplate which controls rendering, and somewhere in there you'd like to be able to show the index (or the position if you don't want to assume that your clients understand zero indexing) visually. What do you databind and how?

There's no inherent property, and the elements can't access their containing collection (for obvious reasons - they could be in 4000 different collections for all you know - it's a one way relationship).

MyStuff/ShowList.xaml


MyStuff/ShowList.xaml.cs


That's it. The reason I think it's a shitty hack is because you have to set your ElementIndexConverter up before you call InitializeComponent or terrible things happen. I think we've all got enough java reflexes left over that we tend to try to call the default stuff in lieu of the super constructor as the first statement, and might even automatically reorganize it. Furthermore, ValueConverters really feel to me like they should be very generic, and this one is locked down to its constructed collection. Still, it's handy to be able to do this when you need to.

Thursday, October 28, 2010

Well, this got way more complicated than I intended

So the situation is that our app is a lot like PowerPoint. Sorry, the Microsoft Office Presentation Platform View Preferred Platform of View, or MOPPVPPV. Something like that. Or maybe that's my most recent architectural pattern, I forget.

Aaaaanyway, there are some things that make maintaining consistency of mental model with MOPPVPPV interesting:

1) It's like PowerPoint, so you'd expect to see thumbnails of the slides, to help you decide where to go next.
2) The content was originally drawn from a PowerPoint presentation.
3) All the content in the application is added to the slides after creation time - even the original PowerPoint content.
4) Once the slides have been imported you can add more content.
5) Other people can add content.
6) Some of the content is private.
7) What the HELL does a thumbnail LOOK like?

I know, it seems obvious: "The thumbnail looks like what the slide looks like". But it looks different to everybody. "Well then, it looks the way it last looked to me". Sure, but what if you're wrong? I mean, what if it was white the last time you were there but it isn't now because 200 people went and added content to it while you weren't watching? "Well then make it look how it really is". But a lot of the content isn't visible to you, either because you prefer to ignore that person or because it's private to someone else. "Well then make an arbitrary determination about what it looks like and then move on."

Okay, so. Here's what I'd like:
1) It should represent the content of the slide well enough that I can decide to go there.
2) It should not impose a high cost on the client machine.
3) It should not destabilize the architecture or introduce too much code complexity.
4) It should springboard another piece of development, rather than being a dead end.

So. It looks like the collective public view, accurate up to fairly recently, and invalidated when the teacher moves away from the slide or after enough time has passed. Fair enough? The presumption is that the teacher will be the dominant author, and that content creation will slow in the absence of active presenting.

I wanted to build a little server - just enough to be able to fire up an InkCanvas, paint it with the right stuff, take a picture of it and deliver that to the client. That's 1, 2 and 4 definitely addressed. As to code complexity? Heh.

It's good for not destabilizing because even if it crashes horribly the thumbs just stop rendering. But let's talk about that crashing. No, let's talk first about design.

I want it to be very simple. So, since this is Microsoft land (Mono doesn't have the InkCanvas unfortunately) we've got two options:
ASP and IIS, or...
Well, ASP and IIS.

Shit. And the only Windows server we've got is running 2003 and isn't going to get upgraded anytime soon, and is probably running versions of IIS and ASP about 10 years old. I'm lucky that it's even got .Net 3.

So I desperately dig around and it turns out that HttpListener is basically the perfect answer. It has to run as Administrator, unfortunately, which I really do try to avoid doing. But whatever. We'll figure something out there.

So first take on the problem:


public static void Main(string[] _args)
{
ServiceBase.Run(new ServiceBase[]{new ThumbService()});
}
protected override void OnStart(string[] _args){
listener = new HttpListener();
listener.Prefixes.Add("http://*:8080/");
listener.Start();
var context = listener.GetContext(); //Synchronous: blocks right here until a request arrives
}


Look good? Guess what it does.

Nothing? Or were you just yawning? Either way, yeah, you're right. Nothing whatsoever. That first GetContext is blocking. I was cool with that, I figured it doesn't have to go that fast on its first go and since we tend to have really high contention around the caching dictionary I would probably end up taking Write locks all the time anyway. But no.

You can't even START this service. Maybe there's a trick, I don't know. The whole Windows Service development and deployment cycle is pretty much like eating a glass rattlesnake at the best of times. So yeah, Windows never works out that the service has started because OnStart never completes, and we never get to see our pretty thumbnails.

So man the fuck up and go asynchronous. Yes, I know you were burnt so severely by Enumerations in multithreaded code that you still don't have full motion in your right arm. Yes, I know that when you write most of your code in Scala and Erlang handling multithreaded code in any other language feels like fucking a corpse on camera. Deal with it.

Orright, so it's not actually a huge difference:


public static void Main(string[] _args)
{
ServiceBase.Run(new ServiceBase[]{new ThumbService()});
}
protected override void OnStart(string[] _args){
listener = new HttpListener();
listener.Prefixes.Add("http://*:8080/");
listener.Start();
listener.BeginGetContext(Route, listener);
}
public void Route(IAsyncResult result) {
HttpListenerContext context = listener.EndGetContext(result);
if (q(context, "invalidate") == "true")
Forget(context);
Thumb(context);
listener.BeginGetContext(Route, listener);
}


Well that seems fine. I was even feeling brave so I actually included the business logic. Not madly complicated, hey. Sometimes we invalidate the cache. Otherwise we thumb, which means either retrieving from the cache or calculating and then caching.

And that one actually starts! But so does the pain. Here are some snippets:

public void Forget(HttpListenerContext context)
{//Guaranteed to crash horribly once more than one person attempts to touch the cache dictionary. Guaranteed to crash horribly once YOU try to use the enumeration of memoKeys and delete its members at the same time.
int slide = Int32.Parse(q(context, "slide"));
var memoKeys = cache.Keys.Where(k => k.slide == slide);
foreach (var key in memoKeys)
cache.Remove(key);
}//Solution to both problems: Add locking. YAY THREADS! At least a ReaderWriterLockSlim is reasonably simple to use. But now our threading code is starting to outweigh the rest of the app.



public void Thumb(HttpListenerContext context){
var requestInfo = new RequestInfo
{
slide = Int32.Parse(q(context, "slide")),
width = Int32.Parse(q(context, "width")),
height = Int32.Parse(q(context, "height")),
server = q(context, "server")
};
byte[] image;
if (cache.ContainsKey(requestInfo))
{//Will crash if someone else is modifying the cache - there's a hidden loop in there. Yay!
image = cache[requestInfo];
}
else
{
image = createImage(requestInfo);
cache[requestInfo] = image;
}
context.Response.ContentType = "image/png";
context.Response.ContentLength64 = image.Length;
context.Response.OutputStream.Write(image, 0, image.Length);
context.Response.OutputStream.Close();
}//Will crash if the client has closed the connection. Or if we've used them all up. Or basically ANYTHING.


So 10 minutes of erlang twiddling at the command line and we've got this little beauty:


-module(bench).
-compile(export_all).
go(HowManyTimes)->
inets:start(),
lists:map(fun(I)->
spawn_link(fun()->
Invalidate = case I rem 10 of
0 -> true;
_ -> false
end,
Uri = lists:flatten(io_lib:format(
"http://localhost:8080/?slide=101&width=720&height=540&invalidate=~p&server=madam",
[Invalidate])),
{ok,{_,[_,_,{_,Length},{_,Type}],_}} = httpc:request(Uri)
end)
end,lists:seq(1,HowManyTimes)).


I know you know, but I sure do love Erlang. Inexcusable syntax and all. So in case you don't read Awful, here's what that code says:

"Get a camera and take a picture of your nice webserver. Because you are NOT GOING TO RECOGNIZE IT AFTER THIS." You will note that I have called my little module "bench". This is because "rapekit" seemed harder to google at work.

So, fling that into action. And yes, the code breaks over and over in the most hilarious and unexpected ways. To cut a long story short (about 1000 words too late, I know), here's the final code for a (moderately) survivable quick (ha!) C# web server:

Oh. My. God.


Damn. That's a lot of plumbing for a quickie web service. Maybe I should have gone ahead with IIS 4.

Or maybe that's just crazy.

Monday, October 25, 2010

Scala Option

EVERY SINGLE PERSON on the internet has written about this. And me!

The thing is, I didn't fully understand most of the stuff that was written about monads. Still don't - but I wonder if that's not because it's sufficiently weird that people just try to cram their own metaphor onto it and then we're dependent on having consistent mental models.

Which we don't. Read any code from anywhere and tell me I'm wrong.

So anyway, Options: Basically, they're Monads.

So anyway, Monads:
[shrug]

What I'M using them for is doing null safe stuff in Scala. For instance:
private def loadDetails(id:Int)={
  val uri = "%s/%s/details.xml".format(server,id)
  try{
    Some(WS.url(uri).authenticate(username,password).get.getString)
  }
  catch{
    case e: Exception => None
  }
}

I think this is fairly idiomatic. Try something, and instead of launching exceptions out to cause runtime crashes some indeterminate time later, wrap it now nice and tight in something which is well typed to be "This might have a problem. You should check if it has a problem before you try to use it". How is this different to just declaring throws and handles? Well, it's on the value you're passing instead of the receiving location. That's good, gives you both flexibility and strength.
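So the call site gets to choose its level of paranoia. Something like this (my sketch against the loadDetails above; the fallback string is invented):

```scala
//Hypothetical consumption of loadDetails; the default value is mine, not from the post.
val details = loadDetails(42).getOrElse("<details unavailable/>")
```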

But where I was stuck was in working out what to DO with this list of Maybe(Details). Oops, sorry that's pseudo Haskell. It's an Option[Details]. You typically would deconstruct them like this:

for (uri <- uris)
  uri match {
    case Some(value) => operateOn(value)
    case None => someSpecificHandlingOfNotHavingAUri
  }

The thing is, most of the time when it comes to specifically handling not having a uri I just want to ignore it and move on. Which led to me writing some seriously bullshit code like this:

uris.filter(_ != None).map(some => some match{case Some(value) => value})

Which is TERRIBLE and difficult to read. All it's trying to do is remove all the nonvalues from the list: the Nones hiding among the Somes. And the most annoying thing is that I'm doing Options in the first place because I'm learning Scala, and I'm vigorously opposed to using nulls. But if I HAD been using nulls, it would look like this:

uris.filter(_ != null)

And we'd be done. Want to see those again? No, me neither. It's heartbreaking. BUT SOFT! What light through yonder window breaks! There's hope, friends. And as usual it's because I am a HUGE IDIOT.

See, here's another thing about Monads (it will be the first thing about monads but who's counting? And should they be capitalized or what? They're magic to me at the moment because they're technology sufficiently advanced. So caps until I understand them.)

The thing is, firstly, they're a container. They contain a thing. They are specialized to contain only that sort of thing. So, for instance, an Option[Int] might be Some(3) or None. So you have a way to wrap an Option around an Int. It's called applying. So

val option: Option[Int] = Option.apply(3)

and then it's an Option[Int]. And you can unapply it later. In scala you tend to do it through deconstructive matching.

Anyways, the other thing I was completely missing about Options is that they are containers. Wait, that was the first thing. Let me try again. They're lists. Or enumerables. Or whatever you call the thing you can go through from the beginning to the end in your paradigm of choice. Seq? Cool, anyway one of those.

That means that this is a perfectly valid idiom:

for (value <- Some(3))
  println(value)

And the result is that it prints out 3. Let's have that again? Iterating over an Option released the value. Or not, depending on whether it has one.

So remember that bloody terrible guy from before? This guy:

uris.filter(_ != None).map(some => some match{case Some(value) => value})

If I weren't a complete idiot it would have read like this:

uris.flatMap(value=>value)

Because flatMap applies a (surprise) flatten and a map. So, for instance, this list of options:

List(Some(3), None, Some(4), None)

mapped to value=>value the list itself is unchanged, but the flatten step treats each Option as a tiny collection, so conceptually you pass through this:

List(List(3), List(), List(4), List())

And flattening that results in List(3,4).

Thus uris.flatMap(value=>value) really isn't that much uglier than uris.filter(_!=null). And is much more scalaish.

(Months later, when I happened across my own blog searching for Scala stuff):

Actually you'd just flatten it. flatMap(v=>v) is exactly the same as flatten, given that the map hof is just identity.
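If you want to convince yourself in a REPL, all three spellings land in the same place:

```scala
val uris = List(Some(3), None, Some(4), None)
val viaFilter  = uris.filter(_ != None).map { case Some(value) => value }
val viaFlatMap = uris.flatMap(value => value)
val viaFlatten = uris.flatten
//All three are List(3, 4)
```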

Sunday, October 17, 2010

Wpf progress animation

So. We've done a hell of a lot of optimization work, which basically began by rewriting our Command architectures to work off the Dispatcher thread by default. This naturally induced a whole lot of runtime errors which needed to be sorted out one by one as we identified operations which rightly belonged on-thread, manipulating or querying UserControls as they did. However, every single operation which did NOT throw said runtime exception was now operating in the background.

The app is fast now. Maybe 3 times, conservatively speaking. So how do I make it faster? I probably don't. That's actually fast enough. But now that we've got our Dispatcher nice and responsive again it's time to use that and add some perceived speed increases too, so that everyone agrees we must have sped up by at least 10 times. Don't ask me to measure that, it's hyperbole. Which is always six inches long.

So. The best speed increase I can think of which is a low hanging cherry is to add wait animations on all our network actions. That's login, conversation join, slide move. The other stuff is pretty much ok. Maybe file upload too.

So how do we do that in WPF? We could just go and grab an animated gif. Badda bing, done. But I'd like it to be a little nicer than that. I won't be doing a great big complex animation like the gears used to be, but similar principles.

For best value, I'd like this animation to be in the visual tree of the main window, at the front, and collapsed. It's also almost completely transparent, and permits clickthrough. Maybe. Maybe it blocks, we'll have to see whether cancelling out is actually a good idea when it's up.

So.



...All the actual contents





We're not explicitly sizing it so it will fit nicely into its parent. Next: The actual markup to create an animated progress blocker.

Tuesday, March 9, 2010

Longest Golf ever

More golfing. I'm always torn with the golf stuff. It seems like a bad idea to practice writing unreadable code but at the same time it's so much fun and such a good warmup to just attack some fifteen minute problem. Mostly it's fun to deploy to a remote server without access to the error messages and try to figure out how they're running the program and how they're reading the input.

A little of the joy is gone now that I know the stdins and outs of anarchy golf, but the weird little exercises are still there.


-module(keypad).
-export([m/0]).
m()->
io:format(parse(tokenize([],[],io:get_line([])))).
tokenize(Ts,T,[H|Tail])->
case Tail of
[$1]->[[H|T]|Ts];
_->
case H of
$1->tokenize([T|Ts],[],Tail);
_->tokenize(Ts,[H|T],Tail)
end
end.
parse(Ts)->
lists:reverse(
lists:map(
fun([C|_]=T)->
case C of
$0->32;
_->key(C)+length(T)-1
end
end,Ts)).
key(I)->
case I of
$8->$t;
$9->$w;
_->97+3*(I-50)
end.

Friday, March 5, 2010

Fizzbuzz

Everybody does it sometime:

-module(fizzbuzz).
-compile(export_all).
go()->
Ciphers = [{3,fizz},{5,buzz}],
lists:map(
fun(I)->
case lists:foldl(
fun({Mod,Sub},Acc)->
case I rem Mod of
0->Acc++atom_to_list(Sub);
_->Acc
end
end,"",Ciphers) of
""->I;
Match->Match
end
end,lists:seq(1,100)).


Honestly, I was horrified at how long it took me - all of fifteen minutes, probably. Some problems just scream 'use a for loop' to such an extent that it's hard to think through a list based way of working. What I do like about this though, is that I didn't stoop to also hardcoding the case of 15 being 'fizzbuzz'. My solution naturally produces the composite, which if this were representative of some bigger system would probably be preferable.
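Since most of what I write lately is Scala anyway, here's my translation of the same idea (my sketch, not from the original post; same shape, still no special case for 15):

```scala
//Each cipher contributes its word when its modulus divides i; 15 composes naturally.
val ciphers = List(3 -> "fizz", 5 -> "buzz")
val fizzbuzz = (1 to 100).map { i =>
  val composite = ciphers.collect { case (m, word) if i % m == 0 => word }.mkString
  if (composite.isEmpty) i.toString else composite
}
//fizzbuzz(14) is "fizzbuzz", because 15 hits both ciphers and the words concatenate
```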

Monday, January 4, 2010

WPF MeTL Implementation Notes

Well, crazy to expect anyone but my internal team to ever read this so I'm going to start to put notes down on project specific stuff, expecting that it might be useful to someone who's going mad and running Google searches until their elbows bleed.

Here's a doozy:

We're running UIAutomation testing over our app now, since there are only seven weeks to go and I'm keen to have a high level functional test verifying basic functionality as we patch and tweak.

We had to build entirely custom ink injection, as that's not one of the supported patterns. Apparently in Windows 7 beta (why is this tied to an OS btw?) you can define custom patterns but I couldn't find any information about how to do it and ended up hijacking the value pattern - it's got get/set semantics over a string so it's not a huge problem. In fact, it's a huge advantage because it pushed me into realising that we could use the same messaging structure into and out of the automation framework as we use into and out of the server.

Reuse as much of the existing parse structure as you can, of course.

What's the doozy? ItemsControl does not expose a default AutomationPeer, but ListBox does. I've gotten so used to these two things in particular being only slightly different from each other that it completely took me by surprise. This, I think, is my major criticism of the Automation stuff, and it's really not their fault - a new dev on my project will change x:Names, or remove them because they're never referenced. This will build, compile and deploy but it will break the tests that can read them as implicit AutomationId declarations.

Thursday, September 3, 2009

First version bench

Transactions per second, measured at the receiving end (50 people sending to 50 peers, 2500 expected). A new batch of messages is only sent once all 2500 have been received, so a single message being lost in the system will result in test freeze and is a failure.

0-0=0
124-0=124
400-124=276
788-400=388
2125-788=1337
2431-2125=306
2890-2431=459
3348-2890=458
3909-3348=561
4164-3909=255
4521-4164=357
4776-4521=255
4980-4776=204
5235-4980=255
5591-5235=356
5846-5591=255
6254-5846=408
6713-6254=459
7019-6713=306
8039-7019=1020
8548-8039=509
9211-8548=663
9362-9211=151
9362-9362=0
10153-9362=791
11305-10153=1152
11823-11305=518
12742-11823=919
14412-12742=1670
16138-14412=1726
18115-16138=1977
19696-18115=1581
20162-19696=466
20526-20162=364
20886-20526=360
21167-20886=281
21167-21167=0
21167-21167=0
21167-21167=0

As you can see, we approached desired speed for a bit but then fell back down and froze. No further work was accomplished for the remainder of the test. The auditing profile looks like this:

Snapshot[period=10000ms]:
Threads=4 Tasks=10632/11837
AverageQueueSize=32.67 tasks

Snapshot[period=10000ms]:
Threads=4 Tasks=3281/4837
AverageQueueSize=137.71 tasks

Snapshot[period=10000ms]:
Threads=4 Tasks=1074/2369
AverageQueueSize=226.95 tasks

Snapshot[period=10000ms]:
Threads=4 Tasks=36/116
AverageQueueSize=0.87 tasks

Snapshot[period=10000ms]:
Threads=4 Tasks=35/118
AverageQueueSize=0.87 tasks

Snapshot[period=10000ms]:
Threads=4 Tasks=34/115
AverageQueueSize=0.88 tasks

What that profile suggests to me is that the baseline 35/35 maintenance tasks, which run every ten seconds on an idling darkstar instance, are completing, and all the others are finding it impossible. They might be always taking more than 100ms, or they might be all grabbing for the same resources. While this does merit more examination, I'm excited to try the server revisions suggested by the PDS forum, so am implementing and benchmarking that now. Stand by...

Darkstar Data structures comparison



This is lifted from a live discussion at the Darkstar forums. I thought maybe it deserved a more structured housing: Original discussion

Application description:

It's like a collaborative powerpoint presentation. There are many slides and each of the slides has a large number of users who are contributing content to it. When a user is on the same slide as me, that user sees the content I contribute. This is implemented with a channel for each slide. There is also a global channel on which a ping is sent to you each time any content is contributed, even if you are on a different slide from the originator. When you move to a slide it is necessary to obtain all of the content history from that slide, as well as to become receptive to all of the new content from that point until you leave.

Our load testing involves hundreds of simulated users moving in tandem from slide to slide. This means that at every new join they are contending heavily for the archived content.

Description over.

We've been trying several ways of optimizing these structures because it turns out that contention is basically what kills Darkstar performance - when two transactions go for the same object in the datastore, one of them has to cancel and try again later. If it was itself already the proud holder of the locks on some other objects that locking is wasted and it all has to happen again later.

Second attempt:


This one involves cloning the keyset from the map of users for that slide. I'm cloning the keys because I theorize that I can release the ManagedObject faster that way: as soon as I exit my transaction I will have put it back in the store, whereas if I passed the keys by reference they might still be locked until the subtask released them. I could be wrong...

At each stage of these attempts, moveSlide is the first server call after message switching.


private void moveSlide(String destination, ByteBuffer message) {
    Integer.parseInt(destination); // Just checking the destination is numeric.
    ClientSession session = session();
    String key = session.getName() + "@" + destination;
    ScalableHashMap slide;
    try {
        slide = (ScalableHashMap) AppContext.getDataManager().getBinding(destination);
    } catch (NameNotBoundException e) { // This slide does not previously exist.
        slide = new ScalableHashMap();
        AppContext.getDataManager().setBinding(destination, slide);
    }
    // One subtask per user; toArray() copies the data out of the ManagedObject.
    for (Object userObject : slide.keySet()) {
        String user = userObject.toString();
        ScalableList userData = (ScalableList) slide.get(userObject);
        AppContext.getTaskManager().scheduleTask(
                new StartDistributingUserStrokes(userData.toArray(), destination, user));
    }

    AppContext.getChannelManager().getChannel(GLOBAL_CHANNEL_NAME).send(session,
            toMessageBuffer("/MOVE_SLIDE " + session.getName() + " " + destination));
}


So, for each cloned key spawn a subtask which does all the work of StartDistributingUserStrokes. As you can guess, that distributes the user strokes:


private class StartDistributingUserStrokes implements Serializable, Task {
    private Object[] allWork;
    private String slide;
    private String user;
    private int packageSize;

    private StartDistributingUserStrokes(Object[] userData, String destination, String author) {
        slide = destination;
        this.allWork = userData;
        user = author;
        packageSize = allWork.length;
    }

    @Override
    public void run() throws Exception {
        ClientSession session = session();
        if (allWork.length != 0) {
            // Send one stroke, then reschedule ourselves with the remainder.
            Object[] strokes = allWork;
            session.send(toMessageBuffer(strokes[0].toString()));
            Object[] remainingWork = new Object[strokes.length - 1];
            for (int i = 1; i < strokes.length; i++) {
                remainingWork[i - 1] = strokes[i];
            }
            AppContext.getTaskManager().scheduleTask(
                    new StartDistributingUserStrokes(remainingWork, slide, user));
        } else {
            // Nothing left: signal completion to the client.
            session.send(toMessageBuffer("/ALL_CONTENT_SENT " + slide));
            session.send(toMessageBuffer("/PING " + user + " " + slide + " " + packageSize));
        }
    }
}
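To see the one-stroke-per-task shape outside the container, here's a plain-Java simulation where a queue stands in for Darkstar's TaskManager (all names are mine, and it omits the final /PING for brevity):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Queue;

/** Simulation of the pattern above: each "task" sends one stroke and
 *  reschedules itself with the rest. The queue plays TaskManager. */
public class ChunkedSender {
    public static List<String> deliver(String[] strokes) {
        List<String> sent = new ArrayList<>();
        Queue<String[]> tasks = new ArrayDeque<>();
        tasks.add(strokes);
        while (!tasks.isEmpty()) {
            String[] work = tasks.poll();
            if (work.length == 0) {
                sent.add("/ALL_CONTENT_SENT"); // terminal task, as in the post
                continue;
            }
            sent.add(work[0]); // send the head...
            tasks.add(Arrays.copyOfRange(work, 1, work.length)); // ...reschedule the tail
        }
        return sent;
    }
}
```

Each dequeue models one short transaction, which is the whole point: no single task holds the data long enough to contend with the rest of the system.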

Wednesday, August 5, 2009

WPF Ribbon

Licensing. Nuff said. Crazy amounts of paperwork, especially if you have to join the whole MSN passport thing at the same time. I want my life back please.

Anyway, here's the first thing I've found out about the ribbon:
If you make a RibbonGroup without anything in it, a NullReferenceException will be thrown with no explanation at all. If you're iteratively inclined like me you'll hit this and be confused. Be advised!

Tuesday, July 28, 2009

Clojure and lisp

I guess this isn't really a gotcha, but I seriously just spent four hours staring at a piece of code which seemed to me perfectly correct and yet misbehaved in the weirdest way. This is the load tester so far:

(import '(com.sun.sgs.client.simple SimpleClient SimpleClientListener))
(import '(com.sun.sgs.client ClientChannelListener))
(import '(java.util Properties Timer TimerTask))
(import '(java.net PasswordAuthentication))
(import '(java.nio ByteBuffer))

(def unicode "UTF-8")
(def package (let [s (slurp "client.clj")] (str s s s s s s s s s s s s s s s s s s s s s)))

(defmacro counter [label limit]
  `(let [count# (ref 0)]
     (fn []
       (dosync (alter count# inc)
               (if (= (rem (deref count#) ~limit) 0)
                 (prn ~label " " count#))))))

(defn props []
  (let [properties (new Properties)]
    (. properties put "host" "localhost")
    (. properties put "port" "1139")
    properties))

(defn decode [buffer]
  (let [bytes (make-array (. Byte TYPE) (. buffer remaining))]
    (. buffer get bytes)
    (new String bytes unicode)))

(defn encode [string]
  (ByteBuffer/wrap (. string getBytes unicode)))

(def received (counter "Received" 500))
(def requested (counter "Requested" 25))

(defn client [i]
  (let [username (format "username%s" i)]
    (new SimpleClient
         (proxy [SimpleClientListener ClientChannelListener] []
           (loggedIn [] (println (format "Logged in %s" username)))
           (loginFailed [reason] (println "Login failed: " reason username))
           (getPasswordAuthentication []
             (new PasswordAuthentication username (. "unusedPassword" toCharArray)))
           (receivedMessage
             ([message] (prn (format "%s Received message" username)))
             ([channel message] (received)))
           (joinedChannel [channel] this)
           (leftChannel [channel] (println "Left channel"))
           (disconnected [graceful? reason] (println "Disconnected: " reason graceful?))))))

(defn publish [who what]
  (. who send (encode (str "/PUBLIC_MESSAGE " what)))
  (requested))

So as you can see, it's quite succinct given what it does, which is fairly powerful. There are also a few bits that definitely need improvement (that construction around slurping in the file and building it up to be big enough is awful), which I'm tackling a bit at a time. The most recent revision was to build the counter macro, which is this bit:

(defmacro counter [label limit]
  `(let [count# (ref 0)]
     (fn []
       (dosync (alter count# inc)
               (if (= (rem (deref count#) ~limit) 0)
                 (prn ~label " " count#))))))

Pretty complex looking, at least to me. Its job is to supply me a function which will increment a thread-safe counter, printing out a defined message when the counter hits a multiple of a particular value. This is to minimize the flood of output that tends to characterize my naive solutions.
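For the non-lispers, the same gadget looks roughly like this in Java terms (my sketch, not anything from the client code above):

```java
import java.util.concurrent.atomic.AtomicInteger;

/** A thread-safe counter that yields a report only every Nth increment,
 *  mirroring what the counter macro generates. Hypothetical names. */
public class ThrottledCounter {
    private final AtomicInteger count = new AtomicInteger();
    private final String label;
    private final int limit;

    public ThrottledCounter(String label, int limit) {
        this.label = label;
        this.limit = limit;
    }

    /** Increment; return a message on every `limit`th tick, else null. */
    public String tick() {
        int n = count.incrementAndGet();
        return (n % limit == 0) ? label + " " + n : null;
    }
}
```

The macro version wins on one count: it generates the closure at the call site, so `(counter "Received" 500)` reads like a declaration rather than object plumbing.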

What was the thing that took four hours? Surprisingly, it wasn't the macro itself. They're fairly logical - just splicing values in at compile time, into code templates. If you've done pretty much any kind of interpolation - XSLT, Velocity, whatever - you've done this before. I don't deny that it gets complicated when macros refer to macros, but it still seems that if you're careful it just works out ok. Note, for instance, that I've deliberately declared my let binding for the actual count integer to be gensymed. (For the non lispers out there, gensyming is telling lisp to find me a symbol which can't possibly conflict with or redefine anything already in existence, because it's from a unique and nonreachable namespace). This, as per the parenthetical, means that it won't conflict. Which is nice.

The thing that took me four hours was that this:
`
is not the same as this:
'

The backquote is syntax-quote, which builds a template you can splice values into with ~; the apostrophe is plain quotation. Only one of those actually means quotation. Bugger.

Wednesday, July 22, 2009

Darkstar Clojure together at last

Well, I'm on holiday so I'm not writing code... Much code.

Current situation is this:

I'm still auditioning Darkstar, which is now at the point that I need a convincing approximation of the current server API in it. Fairly close now. To be fair, it was pretty much 90% there just with core operations... If you're just writing a message router it's pretty much all in. Still some questions to be answered through experimentation; things like:

  1. How reliable and timely is the disconnect awareness if a user's channel goes down? Is it only picked up the next time the server tries to exercise that channel? Is there an implicit heartbeat? Is there a disconnection event from under the covers? How reliable is it?
  2. How easy is it to access the underlying data structures? I frequently need to audit our data records and SQL or similar would be nice. Otherwise it's going to be very hard to pass this off when we're done prototyping.
  3. If a user reconnects under the same identity, will the system reattempt previously failed transactions to that user? What I'm after is something like a poison message queue that retries when there's optimism the situation might have changed.
So anyway, answering these questions isn't so much a matter of coding. What's occupying me at the moment is a different set of constraints:

I'm running the Darkstar server on Ubuntu, and running our eventual client from WPF at this point. Mono doesn't particularly run WPF, certainly not 3.5 SP1 which is where a few of our core features sit, so the client needs to run on Vista. I could carry two laptops around all the time (and last time Stu and I hacked on the client we just used his laptop for the client and mine for the server; it was fine), but that's not sustainable or convenient. In particular, it makes it hard to connect them when there's no network, whereas that's not an obstacle for client/server on the same machine.

Darkstar's pretty agnostic - there is some undetermined specificity about connecting to the server but once you're in it's just bytes on the channel. No strong contracts, no typing.

Not having investigated that aforementioned specificity I can't touch it with Erlang yet, but there's a whole bunch of Java clientry in the basic distro. Only problem is Java gives me hives, and I'm already writing it on the server. The current piece of work I'm undertaking is:
  1. Very thread distributed.
  2. Highly message oriented.
  3. Highly flexible.
I wrote a C# client for load testing which embodies hundreds to thousands of client instances connecting to the same server. It was quite inflexible, although still better than Java would have been, because at least I could hot-swap behaviours through first-class functions easily. I figure with all the available JVM langs out there it should be possible to come up with a solution to this loose spec:

Many clients connect to the server. Their behaviour is highly configurable at runtime. When they undertake actions which should affect other members, that expectation is shared with those members, who coordinate with the sending client in correctly timing and monitoring the result and status of the actions.
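The "highly configurable at runtime" part really just means first-class functions in a registry. A hypothetical Java sketch of the idea (all names invented here):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Sketch of hot-swappable client behaviour: a registry of named
 *  handlers that can be replaced while the client runs. */
public class BehaviourRegistry {
    private final Map<String, Function<String, String>> handlers = new HashMap<>();

    /** Install or replace a handler at runtime. */
    public void install(String command, Function<String, String> handler) {
        handlers.put(command, handler);
    }

    /** Dispatch a payload to whichever handler is currently installed. */
    public String dispatch(String command, String payload) {
        Function<String, String> h = handlers.get(command);
        return h == null ? "unknown:" + command : h.apply(payload);
    }
}
```

In Clojure this is just a map of keywords to fns, which is part of why it looks like the pick of the field here.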

I figure that's an interesting sort of interaction topology - there's direct message passing happening in the client cloud, and a single duplex channel to the server for each client. I'd love to do the first type of interaction in Erlang, and the second might as well be pretty much anything (although, to be fair, Erlang is also pretty shit hot at byte marshalling).

So what have we got in the pot now? Erlang-style message passing, direct Java interop, and hopefully something fairly easy to modify and extend. Clojure? Why not... I'm playing with Lisp at the moment and it certainly seems like the pick of the field for extensibility. How's its interop?

A bit like this:

(import '(com.sun.sgs.client.simple SimpleClient SimpleClientListener))
(import '(com.sun.sgs.client ClientChannelListener))

(def listener (proxy [SimpleClientListener ClientChannelListener] []
                (loggedIn [] (print "Logged in yay"))
                ;; ...all the other methods you feel like implementing
                ;; from either of the two proxied interfaces
                ))

(def client (new SimpleClient listener))
(. client login (new Properties))

That doesn't work, of course, because you need to set up the properties with the actually relevant properties. Might need some kind of wrapper to let me pass key value pairs to a faux constructor, or that might already be in the Java interfaces - I haven't checked yet.

Anyway, should we have a look at the above code? Seems fairly self explanatory, doesn't it? Yay! I'm a smug lisp weenie! At last!

The first two lines should be clear enough - basic Java import statements, with the exception that having specified a package you can list as many classes from it as you like. This is a subtle sort of benefit of prefix syntax, that you can often change an infix idiom to be infinitely extensible without rephrasing.

(def listener (proxy [SimpleClientListener ClientChannelListener] []
                (loggedIn [] (print "Logged in yay"))))

is loosely equivalent to:

SimpleClientListener listener = new SimpleClientListener() {
    public void loggedIn() {
        System.out.println("Logged in yay");
    }
};

But with one major difference: The Clojure proxy is implementing two interfaces without any extra work needing to be done. To do this in Java I'd need to create a new file for a specific interface, extending both SimpleClientListener and ClientChannelListener. Let's call it SimpleClientChannelListener, which is more dignity than it deserves. A few more of those and nobody will ever be able to navigate through my little project.
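To be fair, Java does have one escape hatch for implementing two interfaces without a named class: java.lang.reflect.Proxy. Here's a sketch with two stand-in interfaces (they're my inventions, not the real SGS listener types), and it shows why nobody reaches for it casually:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

/** One object implementing two interfaces via a dynamic proxy -
 *  the nearest Java equivalent of Clojure's multi-interface proxy. */
public class TwoFacedProxy {
    interface Greeter { String greet(); }  // stand-in for SimpleClientListener
    interface Parter  { String part();  }  // stand-in for ClientChannelListener

    public static Object make() {
        InvocationHandler h = (proxy, method, args) ->
                method.getName().equals("greet") ? "Logged in yay" : "Left channel";
        return Proxy.newProxyInstance(
                TwoFacedProxy.class.getClassLoader(),
                new Class<?>[] { Greeter.class, Parter.class }, // two interfaces, one object
                h);
    }
}
```

Every call gets funneled through string-matched reflection, which is considerably uglier than the Clojure version it imitates.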

(def client (new SimpleClient listener))

I genuinely do think that this is sufficiently similar that it doesn't need any explanation. I'll just point out that although the lisp version has more parentheses it is shorter and has much less syntax than:

SimpleClient client = new SimpleClient(listener);

The last one, which is interesting in the way it directly maps Java idiom and syntax into lisp, is:
(. client login (new Properties))
which means
client.login(new Properties());

Interesting, isn't it? Once you've made the small acclimatization away from infix, it even reads the same (Java still has one more character).

I'm going to stop the Java/Clojure comparison at this point because this was really the only section of the project for which it was remotely fair. Once the clients start messaging each other to notify send and request receipt, Java will turn into a terrible mess of cross-threading and I don't want to write it even for demonstration purposes. Nor would it be simple to extend or modify.

Wednesday, July 1, 2009

I'm having a race!

WCF: 200 transactions per second. That's about... 50000 times too slow. Ya, it's going to need to go pretty fucking fast if we're going to pay for it out of our own pockets (gulp).
Darkstar: ? (Says it's about behaviour not speed. Nevertheless I'm optimistic because of their in memory data store and - fingers crossed - seamless horizontal scaling).
Erlang OTP: Nuff said. Have to write the whole thing from scratch, but I can't think of a better friend along the way than Erlang.
Ejabberd: Mebbe. Seems to me that the messages are too big - we tend to rock somewhere between 30 and 200kb.
C# raw sockets: Benchmarking soon, only just finished it. Took longer than I expected, there was plenty of shittiness, and I'm going to have to experiment with some assembler jockey style way of flushing a byte[] instead of reallocating a new one.

Ready and set and go now!

Cross platform inkcanvas yay


Well, you probably figured I was full of shit. I did, and I have first hand knowledge. Regardless, I've managed to overcome it for a bit, and this is a Java+JOGL+JPen InkCanvas, WPF styles. First things first, the proof of the pudding:

Here are the ways in which I discovered that I am very stupid during the execution of this project:

  1. My trigonometry is for the birds, even worse than my ornithometry. It took about four or five days of Stu and I fiddling to discover that while we may have some clue about the basic principles there's a whole black hole waiting in arctangentry. Once we figured out what the problem was and started to google it, the first result was about atan2 (found in all good programming languages near you), and substituting that for atan basically fixed all the problems straight away.
  2. It still doesn't look quite as nice as WPF. Fairly close though. We're polling 90 times per second with about a 1% overrun, so the machine's capable of handling it. I'm doing simple quads, so if you zoom in far enough it's all chunky angles instead of nice smooth nurbs or whatever. WIP, bite me.
  3. Texturing is going to be a timesink like no other. Not just executing it (although that's a fairly new area), but tinkering with it - basically I just want the ink to bleed a little at the edges of each stroke for greater realism. Trouble is, that makes alpha blending important and I think it might be slow.
  4. I have absolutely no idea whether it's faster to do my own trig on the CPU or glTranslate such that I always draw the quads pointing in the same direction and let the coordinate system work it out.
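The atan2 lesson from point 1 is worth pinning down. atan(y/x) collapses opposite quadrants onto one another, because the ratio throws away the signs; atan2 keeps them apart:

```java
/** Why atan2 fixed the stroke-angle bug: same ratio, opposite
 *  directions - naive atan can't tell them apart. */
public class Quadrants {
    static double naive(double y, double x)  { return Math.atan(y / x); }
    static double proper(double y, double x) { return Math.atan2(y, x); }

    public static void main(String[] args) {
        System.out.println(naive(1, 1) == naive(-1, -1));    // true: both pi/4
        System.out.println(proper(1, 1) == proper(-1, -1));  // false: pi/4 vs -3pi/4
    }
}
```

A stroke heading down-left and one heading up-right produce the same atan result, which is exactly the kind of bug that looks fine for most angles and then flips your quad edges on the rest.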
Welp, that's where I got to. Just figured I would write it down so that when I accidentally delete my code there'll still be some evidence somewhere.

ps. I just read the previous post and realised I should fill in the blanks: I'm auditioning DarkStar as the server architecture (which is why I'm fiddling with Java and JOGL), and am going to fiddle with using gluegen to link Awesomium into Javaland. No more C, unless there's a genuine performance reason (Java in the 21st century pretty much sneered just then and told me that there wouldn't be). Whatever, Java. I love your virtual machine, but man do I hate fucking typing you in. Even Netbeans joy doesn't make up for your carpal tunnel causing verbosity. Sidenote: I am a horrible person for laughing that Gosling has carpal tunnel. But man, is there poetic justice in this world, or what? I bet Rich Hickey doesn't have carpal tunnel. Aaaanyway... Sorry, James, if you're listening. I like your Hotspot compiler. I just wish you hadn't backed off of decent closures.


Saturday, June 20, 2009

OpenGL InkCanvas coming right up

Well, it's part way there...

What I've got now is a cross platform system:

Erlang launches and links in a port driver, using a slightly modified ESDL.
The window is created in pure OpenGL, with SDL handling the event callbacks.
Every event gets passed back up to Erlang, where strokes are collated.
Erlang sends accumulative drawing instructions as the pen moves, then does a full redraw when the pen lifts.

That all works, and is nice.

The main question now is, how can I make my ink look as nice as microsoft's? They've put a lot of work into theirs, and it shows.

Here are my thoughts:

Ink quality is a combination of: Pressure representation, ink flow representation and fluid edges.

My first try involved drawing GL_LINE_STRIPS, thickening the line segment according to the pressure at the end vertex. That looked... Crude, to say the least. It was nice and fast, but quite aliased. Turning on antialiasing for ink DEFINITELY didn't solve the problem, and it looks like most graphics cards might be deprecating glLineWidth. Apparently all OpenGL compliant devices must support a width of 1, the rest is hope. Nevertheless, I kept plugging on that for a bit, went through building a gaussian fragment shader to soften the edges. Only a 3*3 kernel, I must admit. Helped a little, and gave a nice impression of blue ink in the way the color suddenly varied wildly across the stroke, but didn't make it much less jaggeddy.

I also tried drawing multiple layers of lines, for a sort of transparent bleed effect at the edges. Looked ok, but was still not antialiased so suffered quite badly from being either 1 or 0 pixels extruded from the overlying stroke.

The current attempt involves taking the original line coordinates and building quads around them. The advantages of this are:
They're hardware accelerated in a way that lines might not be.
They can be textured. Lines cannot.
They don't suffer from the aliasing problem in the same way, and have float precision angles. This makes for much nicer sides.
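The quad construction itself is just a perpendicular offset at each endpoint of the line segment. A hypothetical sketch of that step (not our actual code):

```java
/** Build the four corners of a quad around a line segment by offsetting
 *  each endpoint half the stroke width along the perpendicular. */
public class SegmentQuad {
    /** Returns {x0a,y0a, x0b,y0b, x1a,y1a, x1b,y1b}. */
    public static double[] corners(double x0, double y0,
                                   double x1, double y1, double width) {
        double angle = Math.atan2(y1 - y0, x1 - x0);  // atan2, per the earlier lesson
        double hx = Math.cos(angle + Math.PI / 2) * width / 2;  // perpendicular offset
        double hy = Math.sin(angle + Math.PI / 2) * width / 2;
        return new double[] {
            x0 + hx, y0 + hy,  x0 - hx, y0 - hy,
            x1 + hx, y1 + hy,  x1 - hx, y1 - hy,
        };
    }
}
```

For a horizontal segment the offsets are straight up and down; the float-precision angles are what give the quads their nice smooth sides compared with glLineWidth.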

Problems with it?
It's hard to calculate, so as we get hundreds of strokes on the screen it might slow down badly.
At the moment there's some cockup with the trig where a particular angle (ENE or WSW on a compass) has visible end swapping - the outer line becomes the inner and vice versa.

We'll have to fix that. Stu will have to fix that, in fact. He's my trig monkey, and much loved for it.

Screenshots coming up, depending whether there are any convenient tools for clipping in Ubuntu.

Oh, btw I just found out that Awesomium hasn't released its Linux port yet so might have to move primary dev of this either back to Windows or to mac. Probably mac. It just seems crazy to try for cross platform but develop on windows.

Thursday, June 11, 2009

You're going to think I'm an idiot, but...

I thought I'd put down for my future reference some things I learned today about C...

I've been in high level languages pretty much my entire career. I learned Java first, even built a game and game design tooling in Swing, so I've been in the wars. Since then Ruby, some Smalltalk, lots of JS, lots of C#, some Python, some Erlang. Probably a couple others, if I really think about it. Point is, those are all high level languages - automatic garbage collection and quite a bit of protection between you and the computer.

Not that I'm hitting buffer overruns and all the other classic nasties yet, mostly because of the circumstances of my encounter with C.

Basically I'm adapting existing embedded driver code from ESDL - taking the bits I want because my API's going to end up running to about a page and I'll do all the OpenGL work in C. This means that there's quite a lot of code there, written by pretty good C programmers - some of them the original ddll:_/_ authors, some of them Dan Gudmundsson and whoever he was working with for Wings3D. They're doing things that seem to me to be pretty clever, but some of them just flew under my radar the entire morning. For example, it was a great surprise to me when I eventually noticed that you could do this:

#include "whatever.h"
int aCollection[][3] = {
#include "stuffInTheCollection.h"
};

Which works because #include is a literal compile time macro, which inserts the content of the file directly into the source (maybe not literally, but that's what I'm guessing - it certainly behaves that way). If you're not looking for that sort of thing to happen, you just overlook it over and over, because that part of the file is just usings and imports, and of no great significance unless you're trying to disambiguate where something came from. Which still happens - a LOT - and is bloody hard to work out sometimes. I haven't yet seen any really bad ones, but I reckon if you were overly clever you could string together a bunch of macros and it would be literally impossible to find any trace of your referenced functions in the source code. Grep would be useless and so would I.

Oh, the other interesting thing about the above code is that it implies that this is a valid header file:

{1,2,3},

trailing comma intended. Very grating if you're coming down from something like Java which just doesn't tolerate syntactic malformation at all (except for itself, but that's another story).

So as far as I can guess, C is just basically a huge undifferentiated clump of functions - sort of mash everything in every included file into one file and you'd have the contents of your binary. Sounds obvious, when you put it like that, but I keep looking for objects and scopes and namespaces and stuff - containers to help me look for my functions. They're not there.

More soon, probably (and eventually I'll finish off the silver standard NewLisp InkCanvas and post that too).

Wednesday, June 10, 2009

It is indeed pretty superficial

Looks something like this:

(Apologies in advance for what blog formatting does to the erlang syntax).

-module(canvas).
-compile(export_all).
-define(WIDTH, 640).
-define(HEIGHT, 480).
-record(sdlmem, {type, bin, size}).
-define(_PTR, 64/unsigned-native).

go() ->
    setup().

setup() ->
    case catch erl_ddll:load_driver("../priv", "sdl_driver") of
        ok -> ok;
        {error, R} ->
            io:format("Driver failed: ~s ~n", [erl_ddll:format_error(R)]);
        Other ->
            io:format("Driver crashed: ~p ~n", [Other])
    end,
    Port = open_port({spawn, "sdl_driver"}, [binary]),
    register(esdl_port, Port),
    cast(21, <<0:32/native>>),
    io:format("Connected to C through Port: ~p~n", [Port]).

cast(Op, Arg) ->
    erlang:port_control(esdl_port, Op, Arg),
    ok.

call(Op, Arg) ->
    erlang:port_control(esdl_port, Op, Arg).

send_bin(Bin) when is_binary(Bin) ->
    erlang:port_command(esdl_port, Bin).

send_bin(#sdlmem{bin=Bin}, _, _) -> send_bin(Bin);
send_bin(Bin, _, _) when is_binary(Bin) -> send_bin(Bin);
send_bin(Term, Mod, Line) -> erlang:error({Mod, Line, unsupported_type, Term}).

That's hardcoding every constant that they generated for esdl, and inlining a whole bunch of stuff that they rightly had in separate files. I'm thinking at this point, though, that there's no point erlang being the one to drive the gl. For one thing, all the other devs on my team are going to want to have documentation resources available to them for the gl work they have to do, and it's all in C. All of it. Which I guess makes sense because as far as I know OpenGL itself is in C.

So. Design decision now is, do I have a high level or a low level API for my erlang stuff? I really don't want Erlang specifying vertices, for instance. I'm much more inclined to build a detailed stroke object with color, pressure, timestamps etc., serialize it down to C and let it throw away the information it doesn't care about (which isn't much really) as it turns it into GL code. The cool thing about that would be that if there's another rendering engine which isn't OpenGL (say, for instance, a JS implementation in-browser or a superfast DirectX one), the erlang client wouldn't need to change anything.

Yah, that's pretty much the decision made for me, I think. Although I do think that I might accept more detail than I send. Maybe the API would look something like this:

C Rendering front End
---------------------
Stroke begun(Point)
Stroke point(Vertex,Pressure etc)
Stroke ended()

Erlang Client
-------------
Render(Strokes, Children, Whatever else I can think of)

Anyway, that's a start. And a lot simpler to deal with (although much less powerful) than the whole big ESDL kit and caboodle. Much credit to Dan for his work, just not worth the porting effort for the 1% I'm actually going to use.

Damn.

Well, I give up on the way I was doing it. The problem isn't SDL 1.3 at all. The tests accompanying that are fine. It's this effort to port ESDL over to 1.3 for the bits I need that's going nowhere. I'm not convinced it's the way I want to do it anyway.

Now, I don't know much about this stuff, but wouldn't it be cool to have a rendering model where every erlang process took responsibility for maintaining its own render state? Maybe? Sort of objects in the old sense, along with visual responsibilities. I guess I'm thinking a sort of Smalltalky thing - although in my mind Smalltalk is very MVC?

Anyway, superficial glue C code coming up, instead of a voluminous ESDL port that I'd never use most of.

Tuesday, June 9, 2009

This is a fun day

It's been a bit of a long haul...

The mission is to represent arbitrary graphics - stylus ink, primarily, and images. It will ideally be in OpenGL because I'd like it to be cross platform and because I want to use Awesomium as a render source when we get more advanced. The programming language of choice is Erlang, because that's what the server architecture is in and I'm experimenting with doing a sort of distributed grid thing instead of classic client/server.

Basically there are two things that need to be done:

1. Prove gold standard ink from a pen - that is, receive and represent pressure information with sufficiently timely polling to have a high fidelity reproduction of the user's movements.

2. Render that to an OpenGL surface, preferably with a high level API.

The obvious choice is SDL, for all these things. The first proof of concept I knocked up was in SDL 1.2, which worked fine for 2 and for half of 1, but didn't have pressure information coming off the stylus.

Someone went and built a Summer of Code project to bring that information to SDL 1.3 Hang on... Szymon Wilczek. That's who did it. Anyway, it works fine. The problem is I can't get anything to render in SDL 1.3, no matter how I try it. I've tried the compat.c, and working through the texture to surface attachment api by hand, but there's just no output. Maybe it's because I'm on a laptop without a graphics card? I'm not sure. Anyway, I had this code working for 1.2 and it's very frustrating!

Sunday, June 7, 2009

Visual Fucking Studio

Let me count the ways:

You inexplicably stop being able to compile my code correctly until I apply a Windows Update (yah, seriously).

'Publish Now' in properties behaves differently to 'Publish' in solution explorer. Neither is correct - xaml files are not deployed with the app. It's... Hard... To run a WPF front end without any fucking xaml. Not impossible, you know, if I'd done all my development in pure C# all along, but then how would I have enjoyed the thrill of making a value converter do something completely inappropriate like representing an object with a relevant text string (three classes, for some of my members).

You have to run as administrator. Even though PAPA FUCKING MICROSOFT says that applications should endeavour not to run as administrator ever.

Every time I even glance at a xaml file you grind to a whimpering fucking halt for like five minutes, leaving me pounding the keyboard and shouting 'I didn't mean it! Stop!' (There is no option to disable XAML reading).

Resharper (makes life bearable) and ViEMU (makes typing bearable) don't work together - you end up having to hammer escape like nine times every time a tooltip comes up, just to be sure you're back in command mode.

What do I fail at next?


So, that was fib. That's pretty much 'Hello, World!' for functional programming (except for Erlang, where hello world is a complete mapreduce algorithm outpacing everyone but google), so I need a project. Something stupid and inappropriate, I was thinking. Computer game?

Shit, why not? Why not build a platformer in lisp? It's got all the things I like:
I don't know how to do it.
I don't have time to do it.
It's probably a ludicrous choice.
It would make Paul Graham smile.

That's pretty much the rubric against which I assess all my new ideas, and it gets four big thumbs up. So here goes:

Tk or something.

Brb, learning Tk.

Nooo... I'm not learning Tk. I'll just use straight OpenGL instead for the moment. Here's some unmodified demo code:

No, here's a public failure to remember to publish my code. When I get home.

NewLisp as standard lisp? Pshah!

Okay, so there's no such thing as a standard lisp. Every single person in the world has written his own lisp (Snurf, but that's the power of lisp! Yeah, right. I'd be so happy, too, if everyone had his own CPU architecture. Would be great), so there's no real argument for purity or education on which way to go.

I'll stick to NewLisp for the moment. I think it's pragmatic and well documented.

Saturday, June 6, 2009

NewLisp doesn't have Loop?

Damn! It's like the uber macro! Even non lispers know about loop in all its non lispy complexity...

Anyway, I was trying to comma separate the list (and maybe along the way fix the nil problem - although I had actually thought lists always had an empty list in the cons cell - maybe I'm crazy):

I think I'll change convention here, too. Should be pretty clear when I'm writing lisp and when I'm commenting in English so I'm going to drop the ;;comment notation.

Here's a first approximation, after ten minutes hacking. I already know what my initial problem is going to be, by the way. It's not that there are too many parens, there are about the right amount. I just can't put them in the right goddamn place. My eye sees absolutely nothing wrong with:
(list first(lst) str rest(lst))
and maybe if you're not a lisper yours doesn't either. But that's not how you invoke functions in lispland. It should be:
(list (first lst) str (rest lst))
which actually looks nicer now that it's there. It's just that the first doesn't leap out at me yet. So anyway, here's my first take on join. Fairly erlang influenced in terms of 'just cram all the symbols together and they'll be flattened on output':

(define (joinString str lst)
  (let ((acc '()))
    (doJoin acc str lst)))

(define (doJoin acc str lst)
  (if (= lst '())
      acc
      (doJoin (list acc str (first lst)) str (rest lst))))

Which reminds me! I should flatten it!

Actually, I just found the final answer:
> (join (map string (map fib (clean nil?(range 1 10)))) ",")
"1,1,2,3,5,8,13,21,34,55"

Sorry to ruin the suspense, but you can't ignore library functions when they're already there...

Here's a better answer, where the nil cleaning has been rolled into range itself via an auxiliary function:
>(join (map string (map fib (range 1 10))) ",")

Actually, here's one better still, which enhances the library 'join' function:

(define (joinString str lst)
(join (map string lst) (string str)))

resulting in:
>(joinString ', (map fib (range 1 10)))

And that one I'm happy with.

Still, I wish I knew why that damned nil keeps getting in there. If this were CL I would understand: it would be the list terminator. But NewLisp says it isn't. Obviously there's something wrong with my logic - but that is a matter for tomorrow. I'm going to call today a success, even though I eventually did have to trawl the API looking for things like (first aList) and (last aList). A comma separated fib list was produced, and will now never be referenced again :)

Yay blogging

Ya, yay blogging.

So I'm learning lisp today. Read some Yegge, read the comments. Have a good book about macros, currently lost. Can't remember what it's called. It was self-published in what was clearly custom typography. All the page numbers were wrong, and the front cover fell off already. That said, good book. Didn't understand what the hell it was about. So, a bunch of trawling later:

Number of lisps in existence:
4,908,801.
Number of lisps which clearly indicate within the first 100 pages of their site that they support Vista (actually my heart's in 7 now but I still have my home laptop on Vista):
Clojure, NewLisp.
Number of lispers who apparently care about Windows:
1 (me, and I'm rounding up).

I like the look of Clojure but I figured I'd start in pure lisp and pick Clojure up later on when I need the libs. Pshah, Java libraries. Someone make me a lisp that works on the Erlang VM and I'll be happy. Yah, I know, Virding already did LFE. Again, I'll pick it up when I start doing any serious work. I don't want to be learning Lisp in an environment that's full of gotchas, where no one has ironed out the wrinkles yet.

So. NewLisp.

Here's my first lisping, just firing in the dark into a REPL based on the vague knowledge that there should be parentheses (if a REPL is as good as I think it is, I should come out of this with an acceptable fib function):

newLISP v.10.0.2 on Win32 IPv4, execute 'newlisp -h' for more info.
>+ 2 2
nil
> + 2 2
+ <40cb15>
2
2
> + (2 2)
+ <40cb15>
ERR: illegal parameter type : 2
> (+ 2 2)
4
;;AH HA! That's how function application works! Let's move on to an incorrectly calculated fib! And figuring out function definition.

> (define fib n 1)
nil
> (define fib(n 1))
ERR: invalid function in function define : (n 1)
;;It seems to think I've tried to pass 1 to n. I guess that's what it does with lists.
> (define (fib n)(1))
(lambda (n) (1))
;;Well, that's an acceptable definition...
> fib()
(lambda (n) (1))
ERR: invalid function : ()
;;And that's clearly not the way to call it.
> fib
(lambda (n) (1))
;;No, I wanted that evaluated. Maybe I defined it wrong.
>(fib)
ERR: invalid function : (1)
called from user defined function fib
;;Oops, did it again. That's calling 1. What if it weren't in a list?
>(define (fib n)1)
(lambda (n) 1)
>(fib)
1
;;Yippee! Well, that's not quite correct as a fibonacci function though... Let's try to move it up to recursion. Baby steps: Define the exit condition, otherwise recurse. Isn't there some special syntax about letrec or something? Let's see...
;;Firstly, how does the if work?
>((if(1)2)
ERR: invalid function in function if : (1)
;;I guess 1 and 0 aren't true and false. Wouldn't it be a full list or an empty list? Maybe? Shot in the dark:
((if('(1)2)
if <408f74>
ERR: list index out of bounds
;;Well, it made the function okay. Was there some problem with that quote thing? I thought that these two were equal: (quote(1)) and '(1). Aren't they? I wonder if I could find out:
...About five minutes trying (eq) and (equal) and (equals) before I blunder into (= 1 1). Now to try out the quote thing:
>(= '1 (quote 1))
true
;;Yay! And I haven't had to google lisp syntax yet! Now, back to the problem at hand:
>(if true 2 3)
2
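For the record, the rule seems to be that newLISP treats nil and the empty list as false, and everything else - including 0 - as true:

```lisp
> (if 0 'yes 'no)      ; 0 is true in newLISP, unlike in C
yes
> (if '() 'yes 'no)    ; the empty list is false
no
> (if nil 'yes 'no)    ; so is nil
no
```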
;;Right. So my halt condition looks something like:
>(if (= n 1)1)
;;Which is hard to test in the REPL because I need to give n a value. How do I give n a value? I have an idea that it relates to let binding, and might look something like
>(let n 1)
ERR: invalid let parameter list in function let : n
>(let (n 1))
nil
> n
nil
;;Well, maybe that was right but it sure didn't produce the desired result.
;;I'll try a different tack.
> (define (finished n) (= n 1))
(lambda (n) (= n 1))
> (finished 2)
nil
> (finished 1)
true
;;And then we'll use finished internally in fib:
> (define (fib n) (if (finished n) n fib(n-1)))
(lambda (n)
(if (finished n)
n fib
(n-1)))
> (fib 1)
1
> (fib 7)

ERR: invalid function : (n-1)
called from user defined function fib
> (define (fib n) (if (finished n) n (fib n-1)))
(lambda (n)
(if (finished n)
n
(fib n-1)))
> (fib 7(

ERR: missing parenthesis : "...(fib 7(\n X\224\""
> (fib 7)

ERR: call stack overflow in function if : finished
called from user defined function fib
...
called from user defined function fi
> (define (fib n) (if (finished n) n (fib (- n 1)))
;;Oops!
ERR: missing parenthesis : "...ib n) (if (finished n) n (fib (- n 1)\200\232\""
;;Okay, I'm officially getting pissed off with not having my brackets visually synchronized. I'm from VI land, we don't put up with this sort of shit.
> (define (fib n) (if (finished n) n (fib (- n 1))))
(lambda (n)
(if (finished n)
n
(fib (- n 1))))
> (fib 8)
1
> (fib 1)
1
;;There we go! Might not be the soundest proof of recursion ever, but it terminated and it came out at the right end. Now let's go for a proper algorithm:
> (define (fib n) (if (finished n) n (+(fib (- n 1))(fib(- n 2))))
ERR: missing parenthesis : "...shed n) n (+(fib (- n 1))(fib(- n 2))\200\232\""
;;That's annoying.
> (define (fib n) (if (finished n) n (+(fib (- n 1))(fib(- n 2)))))
(lambda (n)
(if (finished n)
n
(+ (fib (- n 1)) (fib (- n 2)))))
;;That works for 1 and for 7 but not for two. Finished needs to be redefined:
> (define (finished n) (< n 2))
(lambda (n) (< n 2))
> (fib 8)
21
> (fib 1)
1
> (fib 2)
1
> (fib 3)
2
> (fib 4)
3
;;Yay! Fibonacci! I wonder how I could go about printing out a nice set of fibonacci numbers, comma separated? Firstly, how to print stuff out? Well, it can just be the return value for the moment. Secondly, how to comma separate stuff? It sounds like an accumulator function to me. How would... No, wait. This is lisp. I was about to go and google, like, lisp.lang.collections;

Heh.
So... A comma separating accumulator please:
> (define (commaSeparatedCountdown functionToApply numberToCountDownFrom numberToCountDownTo) doCSCountdown () functionToApply numberToCountDownFrom numberToCountDownTo)
(lambda (functionToApply numberToCountDownFrom numberToCountDownTo) doCSCountdown
() functionToApply numberToCountDownFrom numberToCountDownTo)
;;I figure a top level function which takes the parameters and then a low level function which adds in things like the empty starting list which I don't want to specify every time.
;;I just hit a problem: I know there's such a thing as (cons something aList) but I want to cons lots of things together. Is it really as clunky as (cons something (cons somethingElse aList))? Surely there's something nicer than this? Anyway, here's the ugly (I only have to cons twice every time so I'm not too fussed. There should be an aggregation function defined somewhere I would think).
> (define (doCSCountdown acc f from to ) (cons acc (f from) cons(', (doCSCountdown acc f (- 1 from)))))
(lambda (acc f from to) (cons acc (f from) cons (', (doCSCountdown acc f (- 1 from)))))
> (doCSCountdown () '((n)n) 3 0)
ERR: invalid function : ()
called from user defined function doCSCountdown
;;Oh. Apparently () isn't the empty list. nil? Or a quoted list?
> (doCSCountdown '() '((n)n) 3 0)
ERR: list index out of bounds in function cons
called from user defined function doCSCountdown
;;Apparently not.
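Incidentally, on the 'surely there's something nicer than nested cons' question: there is. newLISP's built-in append takes any number of lists and concatenates them in one go:

```lisp
> (append '(1 2) '(3 4) '(5))
(1 2 3 4 5)
```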
;;Maybe I'm going about this the wrong way. Baby steps. Let's try just mapping the various fibs into a list, then we can comma separate them later. This would mean being able to create a range (1..10) and then mapping it to our fib function. I wonder if there's already a range function?
;;Oh, incidentally: Empty lists are like this - you have to quote them.
> '()
()
> ()
ERR: invalid function : ()
;;So, back to range.
;;Oh, just worked out let too: (let (a) 1)
;;So now really back to range:
(define (range start end)
  (let (acc) '())
  (if (= start end)
      acc
      (cons start (range (+ 1 start) end))))
;;That was pretty cool. It just worked. I still don't understand why the first parameter to let has to be a separate list though. What's the deal with that? Can you assign the same value to several bindings at the same time?
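A later note on the let question: the first parameter is a list because it holds symbol/value pairs, so you can make several bindings at once - either flat, or fully parenthesized:

```lisp
> (let (a 1 b 2) (+ a b))
3
> (let ((a 1) (b 2)) (+ a b))  ; equivalent, fully parenthesized form
3
```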
;;Now, the useful thing is going to be mapping to the range. Let's take a quick sample:
...About forty minutes of banging my head against the wall later.
;;Doesn't seem like my range function is that good - it produces
(1 2 3 4 nil) instead of (1 2 3 4). And that blows everything apart. And the only way I've found to get rid of the nil is not to avoid producing it, but to take it out afterwards in what must be the hackiest way ever:
(rest(reverse(range 1 10)))
I am not proud. Moving along...
Now it all looks something like this:
> (map fib (rest(reverse(range 1 10))))
(55 34 21 13 8 5 3 2 1 1)
;;Which, if you take out all the grossness with my range algorithm, is sort of okay... Except for it being in the wrong order :)
> (reverse(map fib (rest(reverse(range 1 10)))))
(1 1 2 3 5 8 13 21 34 55)

And that's enough for today.

Okay, one small revision: Instead of letting my code fall apart on the nil, I'll just make finished consider it as another end condition. Now it looks like this:

>(define (finished n)(or nil (< n 2)))
Which gives us:
> (map fib (range 1 10))
(1 1 2 3 5 8 13 21 34 55 nil)

Which is what I wanted all along, except for the commas. I'll do those later.
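A postscript on that damned nil, now that I stare at range again: I suspect the acc returned in the base case isn't the one bound by the let at all - the let closes before the if runs, so by then acc is just an unbound symbol, which is nil, and consing numbers onto nil is what plants the nil at the end. A sketch of a fix (returning the empty list directly; not tested in the original session):

```lisp
;; Hypothetical rewrite of range: no accumulator, and the base case returns
;; the empty list, so cons builds a proper list with no trailing nil.
(define (range start end)
  (if (= start end)
      '()
      (cons start (range (+ 1 start) end))))
```

With that, (map fib (range 1 10)) should come out clean - no nil, no reverse-and-rest hack, and no need for finished to special-case anything.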