Thursday, November 15, 2012

Hacker News, Censored

For some reason, HN likes to say "You're submitting too fast. Please slow down. Thanks."

Oddly, it only seems to happen after an account has posted a comment not everybody likes.

Unsurprisingly, yours truly doesn't care about political correctness and will flame when appropriate, thus getting censored by HN ...

Why blog about it ?

Well, I believe startups shouldn't be a place to be politically correct, or blind to other people's points of view, especially when they're revolutionary.

If you, too, think HN should *not* be censored, feel free to bump this up.

Anatomy of the Best Programming Language

"A programming language must always enable full control of the underlying machine code in order to maintain full functionality."

"A programming language must be paradigm agnostic and enable any paradigm to be used"

"A programming language must be translated into perfect C until its native compiler yields equivalent results"

And that is why C >>> all right now

So please, if you're going to create a language, mind those three, and spare us more of the following:
  • religion wars
  • short-lived codebases
  • gotchas all around
  • shitty performance
  • more religion wars

WIP: don't hesitate to participate, I think we need a few more axioms here, about terseness and other things too.

The Age of Education

We've finally reached it.

Asia is on top of the world; Europe and the US are slowly but surely sinking to the bottom of the ocean.

And that's because we lost our last advantage, education.

Our educational system has not only failed to improve over the last thirty years, it has managed to worsen.

We can either focus on education right now, or face the middle ages that are coming, with the western world as a mix of second and third world countries.

In the Western Corner

The US, which started from a very low level and relied on imports rather than producing smart people to fill the ranks, has gotten even worse, and seems unable to keep importing talent now that other countries offer their own incentives to keep their brains from being drained - and it works.

Europe, which never had the brain drain to start with but used to be one of the best "smart people factories", is slowly but surely losing in quality.

On the other side...

Japan is as good as ever, producing many of the world's top minds (many US Nobel laureates are clearly made in Japan).

Korea (the good one) is at the top of the world: home to the world's best consumer electronics brand, and producing a lot of very talented, hard-working people - Jaedong and the others.

China, which still lacked brains a few decades ago, has solved that problem, and is now importing knowledge and producing its own smart people.

The Future

I predict that modern companies such as google will increasingly focus on staff improvement, and ultimately on the full scope of education, in order to create a local and global advantage for themselves and, through that, for their country.

There will, however, be a bunch of businesses that will simply move to the now-rich part of the world and leave you to rot.

In my humble opinion, spending less than 30% of our total money on education is a very stupid decision.
And we never did spend that much in all of history.
Time to wake up ?

The First Technical Founder Mistake

I personally am a technical founder with decent knowledge from motherboard to front-end design, and that gives me the assurance that I don't know anything and should either learn a lot or take other talented people on board to do most of those things properly.

The Pattern

Somewhat technical founders (i.e. with some scripting knowledge and a bit of database experience, e.g. MySQL) know they can reach "a" result by themselves, and rush to do it without considering the major pitfalls.

They may very well not be the best people to do it.

Quite clearly a better programmer (there's always one) could have written it in half the time while you were securing funding or figuring out another important part of the business.

A well-known example

facebook has that problem, with a mostly PHP and MySQL background brought by the founder and generalized as company tradition.

That makes their IT stack extremely ineffective and at least triples their costs, a major factor in survival probability (burn rate).

I have since heard that they eventually, at least partly, woke up and started considering real tools instead of toys, but clearly they are still suffering greatly from a legacy of bad design choices.

The Impact

Big infrastructure costs -> (more) advertising -> fewer users -> much less money longer term, and the possibility of losing that winner-takes-all market.

The Solution

Always be improving, do stuff yourself if you have to but always look up and reach for quality, it's a better long-term strategy.

Tuesday, October 30, 2012

Solving the wrong problem

Hurricane tragedy: everybody rushes to solve the logistical and human problems related to the hurricane hitting populated areas.

My view : fuck that, let's kill Hurricanes instead.

We have many hurricanes or proto-hurricanes to test on, we have weapons powerful enough to deflect and destroy hurricanes, why the fuck are we still working in fire-fighting mode ?

(for the naysayers: a hurricane is a form of equilibrium; disrupting it in the right way will take it down without spending as much energy as the hurricane holds; current military weapons, including oxygen combustion bombs, are more than powerful enough, especially if used in coordinated strikes with multiple devices)

nb: I'm not saying we shouldn't help them, just that preventing other hurricanes will save many more lives and should be our primary focus. And of course there is a major parallel between this and the "why don't we do anything big anymore" post linked on HN recently. Climate control has been within our reach for decades and we haven't done much to make it a reality yet.

Wednesday, October 10, 2012

Client Side Load Balancing - it rocks

Server-side clustering and load balancing is bad mkay.

It's far too much of a shotgun approach to deliver real efficiency, and it brings its own problems (more load balancing layers, costs, SPOFs, etc.).

How about a simpler, stronger, more scalable approach ?

1. Get your server list !

  • Option a) AnyCast to the "closest" server to receive the local server list (hosting on a CDN is probably an option)
  • Option b) The server list could even be loaded with the application
  • In both cases, the list ends up cached in the application - purely IP beyond the first load
The list is of course a JSON string sorted by location (0,1,2) in order of ping distance, trimmed to needs (i.e. maybe only listing 4 servers for each secondary country instead of transferring a huge JSON file for the fun of it), pre-computed for each country and stored in the CDN.
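As a purely illustrative sketch (the layout and addresses below are invented here, not taken from a real list), such a pre-computed list could look like this:

```javascript
// Hypothetical pre-computed server list, as served from the CDN:
// index 0 = servers in the closest country, 1 = next closest, etc.
// Addresses are documentation-reserved examples, not real servers.
var serverTiers = [
  ["198.51.100.10", "198.51.100.11", "198.51.100.12", "198.51.100.13"], // ping distance 0
  ["203.0.113.20", "203.0.113.21"],                                     // ping distance 1
  ["192.0.2.30"]                                                        // ping distance 2
];
```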

2. Select server from list @ random

function rand(x,y){return Math.floor(Math.random()*(y-x+1))+x;}//inclusive range - java style random sucks
var index=rand(0,jsonArray.length-1); //JS is readable. for everyone - including liberal arts ;)
var tries=1;
var url=jsonArray[index];

3. If a request fails, client picks another server and retries


4. If three different servers from the closest country (0) fail, the app starts trying with the next one (1), etc.

If too many errors occur, the application contacts any of all the servers it knows to get a new list. If even that fails, it resorts to asking the CDN.

5. When a server becomes overloaded, it starts trashing requests on purpose

And the application retries on other servers, thereby balancing the inevitable imbalance of the probabilistic load balancing solution.
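Steps 2 to 5 above can be sketched as one small client-side helper. This is a minimal sketch under assumptions: the tier-grouped list shape, the `fetch` transport, and the exact retry limits are mine, not a definitive design:

```javascript
// Sketch of steps 2-5: random pick, retry on failure, tier escalation.
function randInt(min, max) { // inclusive integer range
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

function pickServer(tiers, tier) { // tiers: arrays of hosts grouped by ping distance
  var servers = tiers[tier];
  return servers[randInt(0, servers.length - 1)];
}

function requestWithFailover(tiers, path, cb) {
  var tier = 0, failuresInTier = 0, totalTries = 0, maxTries = 10;
  function attempt() {
    if (totalTries++ >= maxTries) return cb(new Error("all servers failed"));
    fetch("https://" + pickServer(tiers, tier) + path)
      .then(function (res) {
        // an overloaded server trashing requests shows up as a non-ok response
        if (!res.ok) throw new Error("server error");
        cb(null, res);
      })
      .catch(function () {
        failuresInTier++;
        if (failuresInTier >= 3 && tier + 1 < tiers.length) { // step 4: next country
          tier++;
          failuresInTier = 0;
        }
        attempt(); // step 3: retry on another randomly picked server
      });
  }
  attempt();
}
```

A failed or deliberately trashed request simply triggers another random pick; after three failures in the closest tier the client escalates to the next country, matching step 4.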

Why it's better:

Simpler, more scalable, more efficient, more reliable, better availability - all by design: fewer components, no central overloaded ASIC, no server-side cluster management, no possible LB failure, and potential live redundancy (i.e. API calls sent to multiple servers) across N servers.

Because failover is handled in the application itself, as long as the user has an internet connection you can guarantee 100.00% uptime (as in 100.00% of user connection uptime, the theoretical maximum anyway), without any concern for possible browser stupidity, DNS caching from browser to global server, DDoS, cache poisoning, unresponsive servers that look alive to the load balancer, overloaded servers, etc.

It enables you to deliver REAL availability to your end users because that's the only thing the failover mechanism can actually monitor.

Much better than that: with a properly designed application, the user will at most need to log in a second time in case of failure - unless login/auth was implemented using an external service like google or facebook, in which case the failure has absolutely no effect at all from a user perspective.

Since DNS is not even used except in improbable cases (accessing the CDN for a new server list when every single server in the original list failed to reply - something that usually means we got hit by an atom bomb, we're on the other side of the great firewall, OR we lost internet connection, which is of course checked after a few non-responsive servers), we can safely say that the following have no effect:

  • DNS shutdown
  • DDoS on someone who shares DNS with you
  • DDoS on anyone's DNS actually
  • DNS cache poisoning

And of course, it can be implemented in any web application, web site, application, mobile app, ...

Why it doesn't work like that

This solution looks so much better than what is actually being done that I expect someone to hop in at any time and tell me why it cannot work.

So please go ahead, enlighten me and I'll come up with a much better solution then ;)

Thursday, October 4, 2012

Hiring the Right People

Those who are "good" at what they do.

Part 1: why I am awesome (seriously, I rock)

I'm quite good.
Because I want to be good.
Because I cannot stand bad.

Part 2: what about the others

No one is born good.
All the good ones became good.
Because they wanted to be good.
Because they couldn't accept bad.

Part 3: how to find the good ones

I prefer finding the bad ones, usually those that remain are quite good ;)

If they accept bad, you can be sure they're bad.
If they don't want to be good, you can be sure they're bad.
If it's both, they're really bad.

Part 4: building the test

Here, it gets a bit more complicated, because you need someone who's actually good to tell you what is bad and what is good.

That's a bit chicken and egg, but you can always ask me for IT, or spend the time to craft deep psychological tests that cover the two simple axioms I outlined above.

Part 5: How to keep your good people

We know their motivation, it was outlined above, now we just need to feed it.
Help them improve, make them work with good tools and towards good goals.

Then, pay them well (say 2 to 3 times industry standard).
If somebody "good" doesn't bring you at least 5x the value of somebody "bad", he's probably more "bad" than "good".
While this is secondary to treating them well, they have to feel that you recognize their contribution, otherwise they'll be frustrated and move on.

And most of all, don't ever ask them to follow the decisions of somebody who's actually 'bad' and doesn't want to learn (be convinced of the better alternative) - you'll piss them off and it'll end up in non-productivity or them leaving.

Your company can be the awesomeness sanctuary they've been looking for, don't miss that opportunity.

Here are two example tests, that many people will disagree with, but that are nevertheless perfectly accurate.

Example 1: MySQL

Do you know MySQL (y)
Do you use it (n)
Why (because it's broken in many ways and the advertised features aren't actually supported)

If someone says they use MySQL, and they were NOT forced to do it, you have determined that they can accept bad.
However, they might still be pretty good, if they actually want to be good - and you should settle with that because that's still a lot better than the rest.

That's something you can measure easily, too, by presenting them with mysql's shortcomings and pgsql as a solution.
If at that point, the person says anything like "depends on use case", congratulations you've confirmed they're real bad, don't hire them.

Example 2: Apple OSX

Do you know OSX(y)
Do you use it (n)
Why (because it's broken in many ways and fuck it why can't I maximize a window, what the fuck is that moronic one-button clickpad, I'm losing productivity aaaaaaaargh) - bonus points if they're foaming at the mouth, it means their inner world is incompatible with failure, that's a guarantee of extreme attention to quality .

If someone says they use Apple OSX, and they were NOT forced to do it, you have determined that they can accept bad.
However, they might still be pretty good, if they actually want to be good - and you should settle with that because that's still a lot better than the rest.

That's something you can measure easily, too, by presenting them with OSX's shortcomings and linux as a solution.
If at that point, the person says anything like "depends on use case", congratulations you've confirmed they're real bad, don't hire them.

This is a test for techies only - using apple products is not a relevant metric for most other fields.

Bad broken tools don't depend on use case, they depend on hospital bills.

Disk lamer

I totally forgot to mention Curiosity, but then it was implicit in the "Do you know ?" part and subsequent questions.

If you were shocked by one of the two examples, please note that I totally respect your freedom of choosing inferior tools, and am merely pointing out their inferiority, nothing more.

As everything that I write, this article *could* be completely wrong.
I just feel that it's right, and as it is quite clearly much better than current hiring processes, you should probably use my method if you want "good" people to work for you.

Wednesday, October 3, 2012

What Really Matters

I want to see humanity progress

There's nothing quite like a man setting a foot on the moon, the first space station, cures for cancer or the revolution of mentalities.

I want to see people become smarter

There's nothing quite like a child learning to talk, walk, read or anything.

There's nothing as fulfilling as actually making a strong impact on the world. 

And what stronger impact could one have than empowering others to make a much stronger impact.

I want to see people become wiser

Karma was understood by Jesus ("father, forgive them ... ") and by Buddha a long time ago and yet some people ignore the obvious fact that life must be respected (because we're life, duh).

Equilibrium was well understood in Chinese culture (yin/yang) a long time ago, and yet most people thought it acceptable to allow large-scale human activity that did not take into account the need for a complete cycle.

As a race, we must find a way to promote wisdom, or it's going to take infinitely longer for "heaven on earth".

I don't have the right solution yet, but it shouldn't take too long if we all search together ;)

and there I am, working for money so I can buy the freedom to work for those goals -- not bad but not quite there yet. Still, I think I've got a plan :)

Friday, September 21, 2012

install make

apt-get install make

Welcome to 2012, where linux ships in 800Meg distributions that don't even contain make and gcc.

CO2 is not a problem


It doesn't matter how much CO2 you produce, the only thing that matters is that all the CO2 produced is MANAGED in order to return to the initial state.

That means that we can continue burning fossil fuels as long as we plant X trees and build Y CO2 processing plants.


The world is FULL of solutions to all those problems, from planting trees, to directly piping industry CO2 output to other systems, to recycling nuclear waste, to fusion energy and so much more.

So where's the problem ?

As usual, where the power lies, with the people.


That's where the solution is, too.

Friday, September 14, 2012

facebook's HipHop for PHP is a lie

Ever since I heard about HipHop (from a beginner Dutch programmer's website -- ), I had my doubts about the approach, and my belief was that real Cpp would have been so much better than automatic PHP-to-Cpp translation+compilation.

I of course knew that PHP was half a billion times slower than Cpp (actually just about 31 times), and even much slower than C (35 times says the benchmark), which is also the level of "highly optimized C++" code.

On facebook's HipHop php page, the presentation of HipHop is that it takes 50% cpu of the apache/php combo.

We of course already know that simply pre-compiling the PHP will yield such results (actually far better in the case of something as simple as facebook's web pages) - and the immediate conclusion is that their hiphop compiler is totally unable to bring PHP code to C-like speeds.

We also know that there are other, simpler webservers (that can fit the facebook bill) than apache's httpd out there, most of which beat it in CPU and memory usage - the very fact that facebook did not write their own pure C web server tailored to their own needs is a mystery to me, considering the scale and infra costs.

According to the benchmarks currently available on the web, when PHP code is hiphop compatible (some major stuff is not, and the half-compatible stuff will compile but end up much slower), it can see anywhere from the same performance to between 2 and 10 times better performance, i.e. between 4 and 17 times slower than real Cpp.

The other major observation everyone made is that the better memory handling in the compiled version led to memory savings between 25 and 50%.

Not that bad for a facebook hack ;) Too bad it's mostly due to precompile and not to code quality.

If that's what they call "highly optimized C++", I'll make sure to stay away from anything they write in C++, including hiphop.

Otherwise, if you can't be bothered to find programmers that know C-fu, you'll find hiphop or APC can save you a lot on server bills.

Note: I have since then improved the PHP submissions to the computer benchmarks game, and the difference between native PHP and C++ has shrunk considerably. There are however major performance issues related to PHP arrays, and one would expect Cpp to solve that problem entirely (the problems are on the order of 100x slower).

js Templating Engines

Before we even discuss templating engines, I believe it's worth pointing out the advantages of client-side markup generation:

Because your markup generator is cacheable and your templates too,
  • you save A LOT of bandwidth
  • the ONLY thing a returning user will transfer is the new data, i.e. the strict minimum you can transfer to create the page - considering you transport the data in an efficient format like gzipped json
  • you save A LOT of processing power on the server-side (string manipulation is always expensive), spending a bit more on the client-side (not an issue, even 800mhz ARM is much too fast for such tasks)

To take advantage of that kind of stuff, just create js that will echo your html markup, with the parameters it got from json (you can put your json in a non-cacheable include js file, or get it via ajax), and make sure everything but the data is cached.
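As a minimal sketch of that idea (the `render` helper and its `{{key}}` placeholder syntax are invented here for illustration), the cacheable part is the template plus this function, and only the JSON data stays uncached:

```javascript
// Minimal client-side templating: the template string and this code are
// cacheable; only the JSON data changes between visits.
function escapeHtml(s) {
  // escape the data before injecting it into markup
  return String(s).replace(/[&<>"]/g, function (c) {
    return { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;" }[c];
  });
}

function render(template, data) {
  // replace {{key}} placeholders with escaped values from the JSON data
  return template.replace(/\{\{(\w+)\}\}/g, function (_, key) {
    return escapeHtml(data[key]);
  });
}

var html = render("<li>{{name}} - {{score}}</li>", { name: "Ada", score: 42 });
// html: "<li>Ada - 42</li>"
```

Everything except the data object can then be served with long cache headers, which is exactly the bandwidth win described above.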

The only way to improve on that is to make the templating engine faster, and the unique data smaller (think about how much you can cache really -- like normalized data ;) ).

Lastly, I don't use any kind of templating engine, but all my client side except html,head,body,script is js-generated, and it's extremely fast, even on my phone.

On the other hand, I write web applications, not websites, so that might be why I don't care about that kind of crap  - web apps are best built with modules like in flex, which by definition handle the view and thus templating internally.

Monday, September 10, 2012

The Best Programming Language

There IS a best programming language

And we can find it by logic alone - what's the most useful code base in the world ?

Unix kernel

What's the best FPS game ever ?

Quake.

The holy kernel is written in C.

The holy quake is written in C.

And the winner is C

Why is it so ?

The machine speaks ASM.

To have something humanly readable, we need the thinnest translation layer possible, a layer that would mimic ASM while abstracting the processor-specific instructions.

We have been using two such translations for three decades, and this is not going to change tomorrow.

The first is FORTRAN, which remains the fastest compute language available, barring ASM, and beating C by a tiny margin.

The second is C, slightly slower, but expressive enough to write EVERYTHING in it, including all the other languages - and it lets you include ASM, beating fortran hands down when it matters :).

When does a language become inferior to C ?

The second you remove the possibility to manually direct the ASM output.

There are many languages where it is IMPOSSIBLE to output the required optimized ASM for the task you're trying to accomplish.

There is NO valid excuse for removing that ability, since abstractions are built on top of each other and you can provide the most powerful abstraction along with no abstraction if you so desire.

Like there is NO valid excuse for removing the possibility to manage memory manually. Garbage Collection is a nifty tool, but unless you can turn it off and do the whole thing without, it's not a worthy addition.

As a summary, for a language to be better than C it must :
  • provide the means to be as fast as C/ASM
  • provide access to every feature C/ASM offer, including manual memory management
  • provide higher-level abstractions as well

What about the others ?

I'll take Java and Python, because C# is a bad copy of java anyway and perl is mostly interesting for the regex goodness.

All those languages bring interesting "approaches" (or religions), syntaxes and operators.

But they're NOT in the same layer.

They're above C, they're not ASM translators, they're concept translators.

They essentially could just be C extensions (like Cpp) or C pre-compilers.

So yes, while some languages may bring some concepts or approaches to the front and enable more programmers to use them with ease, they're usually the equivalent of badly implemented C extensions, like the ultra slow ruby for example.

Mobile Development : HTML5 vs Native

That guy seems to think native is better because the web is slow, and while he clearly has a point for games (even chrome is not there yet), he's dead wrong on applications, and here's why.

  1. most applications do not use 3D
  2. the only "slow" thing in the [html5/css3/js] combo is js-loop-type visual effects (like fadeIn and that type of crap)
  3. v8 is more than decently fast for js processing, firefox is meh-but-ok
  4. most android phones today are powerful quad cores
  5. those 100% CPU web apps have very low quality code, and that has nothing to do with the technology

Every application that is not a game can be implemented just fine in html5, and will be plenty fast in most browsers.

Now for the points that "native" does not address:

More than being limited to one OS, native is limited to a few versions of an OS.

An HTML5/CSS3/JS solution (coded to standards) works
  • on a phone
  • on a computer
  • on a smart TV
  • on a tablet
  • on google glasses

So sure, while some actually design games, and some others can totally afford developing a dozen versions of each application, with a dozen platform specialists to leverage all the platform-specific goodies, the simplest approach remains our good (but evil) friend, chrome.

Less comments, More readability

Comments are technical debt

Like code, they're to be maintained, unlike code they can be totally unrelated to what the processor will do.

Every line that is self-explanatory guarantees a single source of truth, avoids any possible confusion, and minimizes your technical debt.

Unlike code, comments only bring value when the code is revisited, which for so many projects rarely ever happens (I have yet to see code that looks like the 17th revision of refactoring cleanup of doom), and when it happens it covers about 1% of the codebase.

Too many comments

These days, you can find billions of comment lines in programs so simple that comments actually make the read far more complicated than just the bare code.

I can think of many but I recently read a bit of CodeIgniter (a PHP framework), which is so full of comments and empty lines that you need to reduce the line count to less than a fifth to make it readable.

Less comments is more readable

Unless you're using a very particular C or ASM trick (à la XOR var swap or better), which makes sense to explain, comments are useless to any decent programmer.

Let's consider the following code ...

 function sql_match($item,$rkeys,$rcomp){  
      $match=array();  
      foreach($rkeys as $key)  
           $match[]="(".$key." = '".$item[$key]."')";  
      return array_merge($match,$rcomp);  
 }  

I believe we can all agree this is perfectly clear, such a function is so short and so explicit that anyone looking at it will understand it instantly, even many who are not coders.

Would you add comments to this ? why ? what's the point ? It can only get less readable from here on.

Most code does NOT need comments

Let's see with a function that does much much more...

 function pre_edit_actions($p){  
      return specific_pre_edit_actions($p);  
 }  
 function edit_item($p){  
      $p=pre_edit_actions($p);  
      // ... create or update the item here ...  
      return post_edit_actions($p);  
 }  
 function post_edit_actions($p){  
      return specific_post_edit_actions($p);  
 }  

Right here, we have a function that will create or update any item, as well as handle pre edit and post edit actions (like associated uploads or more) including creating and removing relationships with other items.

All so short you can't possibly misread them, with every function call or verification clearly named and everything.

I don't think code should be readable for non-programmers and I don't believe any programmer of any language would have any issue reading and understanding those functions under 5 minutes.

I also believe that the vast majority of code being written can be segmented and abstracted to a level where commenting is useless.

Another approach

Since such code - self-explanatory and well segmented/abstracted - implies almost zero commenting costs (don't underestimate the cost of writing clear comments), clearly improves code quality, and diminishes debug and update costs, shouldn't we be focusing on those characteristics rather than on frail comments that easily become stale, take at least as much time to express clearly, and add zero value in most cases ?

Do you have some code that still requires comments although it seems to be well written ?

Do you have a reason to make code readable by the non-programmers ?

Friday, September 7, 2012

How to survive Paypal

PayPal is evil

I came across a rather interesting blog post on HN today,

In it, the author explains how he and others had their paypal accounts frozen for dubious reasons, for extended time periods, and without any proper customer support.

Now that sucks, and that warning is PRICELESS.

However, the blog post could be answered with the old standard saying:
Want some cheese with your whine ?
I know, I'm an insensitive asshole, but think about it, if you were facing the same risk, wouldn't you rather have a solution with the warning ?

The problem

  • paypal is the fastest, safest, easiest way to pay - in the eyes of many customers (maybe a majority)
  • paypal can freeze your account for long periods
  • paypal cannot possibly freeze your account indefinitely, as a court would eventually force them to unfreeze it
  • paypal seller support is impractical to fix those issues
So, either we drop PayPal and lose 30% flat of all sales (arbitrary number), OR we find a way to lower the impact of the PayPal system bugs.

And in fact, such solutions have existed for a long while, because it has happened in the past that customers would take months or even YEARS to pay an invoice.

The solution

Off the top of my head, one could easily use the following fixes:
  1. withdraw from paypal at regular intervals
  2. have backup paypal accounts in case one gets frozen
  3. use other payment services that enable paypal themselves (so THEY are taking "the paypal risk")
  4. credit cards
  5. other credit solutions
  6. loans
So sure, you will lose time and money on the solutions, but will that really cost you more than losing all those paypal sales because you didn't care that your customers preferred paypal ?

Your call, but considering option 3, which is pretty cheap, why wage a holy war ?

Last thing: the OP talked about his own payment solution. That sounds nice, but either you get into that business seriously and make a global payment solution, or you'll just be annoying users (and you can be sure I will avoid purchasing anything on your site as much as possible, just because I don't give my credit details to "anyone"), losing sales, and risking way more than just your paypal account balance.

If you give a man a knife, prepare to get stabbed.

Wednesday, September 5, 2012

File Sharing between Linux and Windows

File Server = linux ( or any unix really, except crapOSX)

User = linux (android,ubuntu, others) or windows (ouch. still home to visio and lots of adobe stuff though)

It has always been an issue to connect these two, and as all data should be centralized and kept on at least one resilient raid setup within a file server, we always have that big issue of how to connect the autist windows client with the unix file servers.

The standard way used to be (for me at least) to set up samba; as an open source imitation of the windows file sharing solution, it's almost perfect (there are quirks, but it's more than usable).

But then, it's clear that samba isn't that great, and if we could get everything running under the very standard NFS, that would be ideal.

Some guys coded dokan / nekodrive to help with that, unfortunately dokan is a buggy piece of crap, and nekodrive is based on it.

Microsoft offers NFS support, easily click-installable on the Enterprise and Ultimate versions, but that's not exactly the license we all get with OEMs.

So I'm currently looking at finding a way to make w7hp access an NFS share

ocsetup does not work.
sua does not work.
cygwin doesn't want to help.

The world hates me, and apt-get install samba. If you're looking for me, I'll be crying under the shower.

Tuesday, September 4, 2012

The first real ultrabook

update: official trackpad drivers now ok-ish, not worth the trouble. Can't find a bios to disable hyperthreading. otherwise really happy about the laptop still. had to cut a micro-hdmi cable to make the micro-hdmi output work (port in laptop too deep).

Been a while since the macbook air started a trend of very thin very light laptops... but so far they were only crap to me.

Sure they were light and thin, but do you really expect me to work in 1366x768 on a MBA or 1600x900 on a zenbook ?

More than that, the only credible competitor to the macbook air, the asus zenbook, had a very bad touchpad and keyboard, or so the reviews said.

And then, ASUS eventually got it.

And then, I bought a laptop.

I must say this zenbook prime hit the spot with the only quality I was looking for in a thinbook (confusion intended) : a high quality full HD IPS panel.

This screen is just awesome, and although the keyboard feels a little strange, it's more than fine for a laptop.

Just a word of warning though: basic stuff like tap-to-click has more than 300 ms of lag (I call that completely unusable) and should be disabled.

The rest of the touchpad is fine, except that the bottom would've been better without a touch sensor: it accidentally moves the pointer when you click, or registers the click as a second finger, or...

All in all, once I find the touchpad configuration utility, disable tap-to-click and find a way to ignore the bottom of the sensor area, it'll be perfect.

Let's hope the other manufacturers wake up and we start seeing more ultratops with killer screens (like an affordable full HD MBA (although they'll probably call that Retina, or Cornea... or whatever really) running Windows 7...).

Oh and, contrary to most reviewers, I don't get free stuff or make money so you can trust me ;)

Obviously the trackpad driver issues were enough for some people to leave this beautiful screen in the trashcan --

They say a few driver updates do away with the problems; I touch type and have absolutely no problems, as if the European version, which came out later, shipped with a different driver version or something.

major edit

Let me correct this: ASUS update .24 sucks too. The Elan pad sucks; there are no usable drivers like what you'd get with an old Dell / Synaptics combo, letting you set preferences like deactivating the pad when a mouse is plugged in, and no solution for a working click (because the tap sucks, too). Overall I feel like writing my own driver, with a part that ignores the address range of the button positions and makes buttons register correctly rather than laggingly (measured >100 ms; this thing sucks balls). Let's hope I find a solution though.

fixing the problem

Step 1 : get a usable driver

Step 2 : tweak the driver

As explained here : 

Open the etd.inf file (the .bak is not important), select lines 751 to 907 and replace every 0 with 1 (that is, in all the [ETD_SmartPadDisplay*] sections). If your editor can't replace within a selection, get Notepad++.
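If you'd rather not do the replacement by hand, a small script can flip the flags for you. Note that the 751-907 line range is specific to the driver version described here, so double-check it against your own etd.inf (and back the file up) before running this sketch.

```python
def enable_smartpad_flags(path, start=751, end=907):
    """Replace every '0' with '1' on lines start..end (1-based, inclusive),
    i.e. the [ETD_SmartPadDisplay*] sections in this driver version."""
    with open(path) as f:
        lines = f.readlines()
    for i in range(start - 1, min(end, len(lines))):
        lines[i] = lines[i].replace("0", "1")
    with open(path, "w") as f:
        f.writelines(lines)

# enable_smartpad_flags("etd.inf")  # back up the file first
```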

Step 3 : palm contact when typing

Skip this step if you intend to game with the touchpad on the laptop: it disables the touchpad while typing.

There's a hidden feature in the driver, in the ETD_OtherSetting part, that covers disabling the touchpad when you type:

HKLM,%ServiceRoot%"\Elantech\OtherSetting",DisableWhenType_Enable,%REG_DWORD%,1 ; 0 = Disable Function, 1 = Enable Function
HKLM,%ServiceRoot%"\Elantech\OtherSetting",DisableWhenType_AllArea,%REG_DWORD%,1 ; 0 = Edge Area, 1 = All Area
HKLM,%ServiceRoot%"\Elantech\OtherSetting",DisableWhenType_DelayTime_Tap,%REG_DWORD%,1000 ; Unit (ms)
HKLM,%ServiceRoot%"\Elantech\OtherSetting",DisableWhenType_DelayTime_Move,%REG_DWORD%,1500 ; Unit (ms)
HKLM,%ServiceRoot%"\Elantech\OtherSetting",DisableWhenType_DelayTime_Gesture,%REG_DWORD%,1000 ; Unit (ms)

This part is cloned in every ETD_OtherSetting section; as with the previous tweak (ETD_SmartPadDisplay), I didn't check which id is actually used, so I changed them all again.

Once this is set, the touchpad will be disabled DisableWhenType_DelayTime milliseconds after you start typing and re-enabled DisableWhenType_DelayTime milliseconds after you stop.

I'm currently toying with the delays, trying .3 for the Time_Move, but evidently that setting is going to be quite personal and related to your WPM and typing density distribution (i.e. if you type a bunch, wait, type a bunch, you risk having the touchpad reactivate during the typing break).

For some reason there is no registry key to handle that, and no file in system32 contains that string, so... goodbye enable/disable script, it seems.

Step 4 : Left and Right click

I noticed the touchpad considers any click, from anywhere on the pad, a right click as long as the last "new touch point" landed in the right half: keep one finger in the left-click position, put a finger on the right half of the pad, click with the left finger and, wtf, it's a right click.

Another hidden setting in the driver, it seems ;) - actually, three settings:

HKLM,%ServiceRoot%"\Elantech\DriverOption",ButtonMode,%REG_DWORD%,1 ; 1 = Button Combine, 2 = Button Independent
HKLM,%ServiceRoot%"\Elantech\DriverOption",ClickPadActiveAreaClickMode,%REG_DWORD%,2 ; 1 = Not Two Finger Click, 2 = Two Finger Click (Right Button), 3 = Two Finger Click (Left Button)
HKLM,%ServiceRoot%"\Elantech\DriverOption",ClickPadRestingAreaClickMode,%REG_DWORD%,2 ; 1 = Not Two Finger Click, 2 = Two Finger Click (Right Button), 3 = Two Finger Click (Left Button)

I used 2,1,1 instead of the default 1,2,2; it feels much more normal now.

Step 5 : set some options

After running setup.exe, you can access your mouse settings like a civilized user -- I personally did the following, but that's all up to your preferences.

Removed every multi finger gesture (I hate those things anyway).
Enabled "disable when external pointing device plug in".
Enabled edge scrolling.
Pushed PalmTracking to the max.

Right now I think that's all ;)

Cartels, or how corporations and the government gang bang your wallet

Cartel cases, and how the mafia state benefits from them.

Prior to the case, shoemakers A and B are the only participants in the market, and sell their shoes for 100 zlitos a pair.

Shoemakers A and B agree on a price of 200 zlitos for a pair of shoes.

This of course is unheard of: a year ago, the same money would have bought 2 pairs.

However.. you do need that pair of shoes and you pay for it.

The evil shoemaker (A or B) receives 160 zlitos, and the mafia state gets 40 zlitos (let's assume a simple, clean 20% VAT rate).

Later on, after many complaints, the state will investigate the matter using the zlitos you and every other citizen of zlitland paid in taxes last year.

When that succeeds, shoemakers A and B will both be fined <fine amount> zlitos, which the mafia state will then use partly for your good, partly for its own.

So in a sense, what really happened here is that the state levied another tax on the shoes, in the following manner:

step 1 : 200 zlitos pair'o'shoes purchase -> 160 shoemaker, 40 state

step 2 : cartel fine -> -2 million shoemaker, +2 million state
fine amount per pair'o'shoes -> -40 shoemaker, +40 state
(for the sake of the argument, we'll consider it happens after 50,000 pairs have been sold)

step 3 : shoemakers' tax on profit -> (120*33%*20% = 8) -8 shoemaker, +8 state
(for the sake of the argument, we'll consider a low 33% real profit rate (i.e. before accounting conceals most of it) and a moderate 20% profit tax)

Normal mode -> 100 zlitos pair'o'shoes -> 75 shoemaker (100%), 25 state (100%)

Cartel mode -> 200 zlitos pair'o'shoes -> 112 shoemaker (149%), 88 state (352%)
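The zlito arithmetic above can be sketched quickly, per pair of shoes, under the same assumptions (20% VAT, 33% real profit rate, 20% profit tax, fine spread over 50,000 pairs):

```python
VAT = 0.20          # VAT rate, taken off the sale price
PROFIT_RATE = 0.33  # assumed real profit rate on net revenue
PROFIT_TAX = 0.20   # tax on that profit

def split(price, fine_per_pair=0):
    """Return (shoemaker, state) shares per pair, rounded to whole zlitos."""
    vat = price * VAT
    net = price - vat - fine_per_pair
    tax = net * PROFIT_RATE * PROFIT_TAX
    return round(net - tax), round(vat + fine_per_pair + tax)

print(split(100))      # normal mode -> (75, 25)
print(split(200, 40))  # cartel mode -> (112, 88)
```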

Now you know why the fines required by the states are always lower than the profits generated by the illegal arrangement...

... and that is what happens when everything else is legal.

On arguments of authority

We often hear the names of the following companies associated with high technological skill and technological innovation:


The following facts, however, tell a different story.


Facebook:
  • is a copy of myspace and others, which succeeded because it spread through a very fashion-addicted layer of the population, the university students - and because the alternatives sucked way too badly to be credible.
  • uses mysql
  • uses a php compiler rather than using proper C/Cpp for speed


Microsoft:
  • is a collection of smart market moves that positioned an inferior OS as the logical choice for many vendors - everyone who used a Mac Classic knows what I mean: DOS couldn't touch unix, and Windows 95 was the first time Microsoft came close to Apple from a visual standpoint
  • is responsible for microsoft server (bad)
  • has every single one of its enterprise network services, like DNS or AD, better served by Linux appliances


Google:
  • made ONE search engine. That's like writing a crawler, figuring out an ok way to store data and get it back out, and that's it. Sure, they did it right, but mostly they were smart enough to license it to Yahoo while keeping their branding, and then NEVER monetize it until everyone was using it (an equivalent approach to search with 20% less efficiency would have led to the exact same success).
  • doesn't make that many public mistakes, but then they too use a MySQL fork for some of their stuff, and Android is far slower than it should be on a quad-core S3, implying that they too, like every other company, have hordes of average-skilled drones.

Excellent business examples, all three of them; really, I admire them. But thinking they have any kind of "guaranteed tech insight" is misunderstanding their business big time.

The most innovative is google by far, and that's because they pay people to try anything and everything, and then pick what's going to market. That is sound business strategy, not tech mastery or whatever the media like to call it.


I know, ranting is bad ...

But then, it cannot come close to the evil that is MySQL, now can it?

I'll take what's known as the "best" MySQL available, MySQL InnoDB 5.6, as it's supposedly the most "RDBMS" of all the forks.

the good :

  • it's a database
  • it's used by beelions of people (at least)
  • there probably are half a thousand tutorials
  • it's available on many hosting environments
the bad :
  • slow joins
  • slow subqueries (some are just absurdly slow, on the order of a minute)
  • no native FTS, resulting in major WTFs on the order of CONCAT ... LIKE
  • hackish foreign key support
  • no IP datatype
  • bad datetime/date datatypes
  • no ACID compliance IF you use triggers (a cascaded foreign-key change does not fire triggers...)
  • MANY gotchas (like weird/undefined default behavior)
  • too many PHP libraries (mysqli looked more or less alright, the old mysql one is crappy, there are others...)
  • no INSTEAD OF triggers on views

So sure, we could go on and on, but right now, if you need an RDBMS and not just an EAV store or an in-memory cache, try PostgreSQL; it will do so much more for you, starting with keeping your data consistent.

Y u blog ?

Lots of stuff I want to share.

Need a place to write it down and get it read.

Need a place to structure it and make it useful rather than reactive.

I most probably will be ranting about the fake IT people, the bad tools and other stuff - but then I might also post some constructive shit now and then.

If you disagree with me and have logical reasoning behind your opinion, please do speak your mind; I'm interested.