Wednesday, December 4, 2013

Moo.com > review

Offering

Moo only offers a few different options for business cards, and that's fine because most people don't want wooden, plastic or metal cards.

A very good thing about Moo is that they let you get started with only 50 cards, which definitely helps when you're still iterating on your design or want to try out a company.

Another awesome feature, which I did not use, is that they allow you to print different backs on the cards in one batch.

This enables you to order cards for 5 (or 50) different people in one batch, or to add some diversity to your cards, like showing off your designs if you're a designer.

Customer Service

So far, just fine, acceptable delays, very polite and forthcoming.

I expected them to deliver on the reprinting of my cards ...

Well, they messed up. The reprint is much better, but about 10% of the cards still have problems, and the shipping was one day late, plus a weekend's delay, i.e. I got my cards after the event was over.


Shipping Delays

They were one day late, apparently due to a printer malfunction. I'd have appreciated hearing about it before I had to ask, but I got an answer fast, and the cards as well.


Printing Quality - first try

The 4 paper layers stick together fine; the edges and corners are fragile, as expected, but were impeccable at delivery.

The paper is fine: the texture looks nice and it feels good to the touch. It doesn't seem as exceptional as they make it out to be on their product page, though; I guess a paper expert would have much more to say about this.

The cards are very slightly warped, which is completely invisible to anyone who doesn't study your card from 10 centimeters away.

The precision is great at a normal distance, but if you look closely it's not that great, comparable to standard photo printing.

It could be a limitation of the source image (they requested 300dpi), but since I couldn't send SVG I'll never know, and it didn't matter for me.

The color (I only used black ;) ) is bad, as in worse than my cheap home laser printer on normal settings.

There is serious white bleeding on the edges (mostly the top) of every card, plus scratches on some.

The white background on the other side is ruined on all cards but two (the top card of each 50-stack) by light smears of black ink picked up from the back of the previous card.

Printing Quality - second try

The color is still unimpressive, there is much less bleeding on the black backgrounds, and most of the cards had a properly white background.

Nothing else changed from the first try.

Evolution

Apparently, Moo.com had many failures in the past, but have been continuously improving, from unusable software and buggy printing back in 2010, to misaligned cards in 2011-2012, to my color problem today, to hopefully fixing all of that and future problems.

Still, they have a long way to go to become a 100% trustworthy printing partner.

Wednesday, November 20, 2013

Tweeting during Events: Is event promotion more important than attention?

There is a definite trend (at least in the startup world) of events that push people to tweet about them.

Startup Weekend is especially involved in that, spending several minutes every weekend pushing people to tweet.

Not only that, but there are "tweet screens" distracting the audience during important pitches and other presentations.

I know social media promotion is important, but is it really worth sacrificing the quality of an event?

What do you think?

Monday, November 18, 2013

Transcend Communication

I hate cities.

Today, cities exist because human beings need to meet face to face.

They need to meet face to face because every other communication means is an ersatz.

A worthy goal would be to uncover what is missing from the videoconference, and create a new means of distant communication that can transmit that.

Steps


Human Research

The plan consists of a research phase, involving actors, Korsakoff patients and MRI.

Using a variety of communication means and the MRI to analyze the patient's reaction to the actor's delivery through those means, it should be possible to extract information about the missing communication bit.

Actors are the best candidates as emitters because they're the most qualified to express the same feelings with low variance.

Korsakoff patients are the best candidates as receptors because they are the only people capable of being a blank slate every day.

Care should be taken that both emitters and receptors be shielded against parasitic information, and that any remaining parasitic state be recorded as experiment metadata (e.g. an MRI of both before starting).

Prototype Research

Once the perceived difference is observed and qualified/quantified, it will be time to try and understand the nature of the missing communication waves (most probably EM brain waves, but you never know).

I think the best approach would be to build a 2x1m "screen" that would of course output light and sound, but also replicate the EM input.

And on the other side, you'd have a set of "cameras" capturing video, audio and EM input.

Since the missing communication part is almost certainly transmitted in a straight line in the real world, a single 2x1m "EM window" should do the trick.

Then, we would once again use the MRI to compare with real-world face-to-face communication.

Money Time

Once a two-way solution is built, this system will enable people to move out from big cities without missing out on job opportunities.
Considering the housing costs in New York or Paris, it is clear that even if the prototype were to cost $1 million, there would still be customers (a company in Silicon Valley could use one to human-link with an office in Europe where they hired three engineers for the price of one in the Valley, for example).

Friday, October 18, 2013

Performance matters

If spending a few days on improving overall performance can save you 30% of your server costs, you should probably do it.

That means:



  • getting rid of your prototyping language for something more fit for production (drop Ruby/PHP, get Python; or drop Java/Python, get C++; or drop C++, get C)
  • reworking your data model to optimize transfers, seek times, etc. through caching and pre-computed data sets
  • getting rid of your prototyping datastore for something more fit for production (drop MySQL, get PostgreSQL; drop NoSQL as main data store if your model contains more than one different item; drop NoSQL as main data store if the only thing you need is an ultra fast K/V store)
  • getting rid of your prototyping infrastructure for something more fit for production (drop your standard EC2 and get something optimized for your needs, do some profiling, fix your bottlenecks, use physical boxes with loads of RAM and SSDs)
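The caching and pre-computed data sets point can start as small as memoizing a pure, expensive computation so repeated requests hit a stored result; a minimal sketch (names are mine, for illustration):

```javascript
// Memoize an expensive, pure computation so repeated requests
// hit a pre-computed result instead of recomputing it.
function memoize(fn) {
    var cache = {};
    return function (key) {
        if (!(key in cache)) cache[key] = fn(key);
        return cache[key];
    };
}

// Example: cache a derived data set per user id.
var expensiveCalls = 0;
var userScore = memoize(function (userId) {
    expensiveCalls++; // stands in for a heavy query
    return userId * 2;
});
```

The same idea scales up to pre-computing whole data sets at write time instead of read time.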


For any big company, it should be a simple choice: invest a few hundred thousand in research and save 60%+ on a huge budget (as Facebook could, and has begun doing).

For a startup, it's even more relevant, since that extends your runway significantly, and pushes back the scaling discussion an order of magnitude or two (in number of users) later.

Usual replies:



  • "a language fit for production will reduce productivity" : 

That's never been proven in any way. It's pure OO propaganda.

  • "ruby is ready for production" :

Rails had an SQL injection weakness (the kind of security hole a one-month beginner would never leave open), and it's much slower than Java or Python

  • "Python is ready for production" :

it's still changing rapidly, has no stable version that could have been time-tested for security and performance, and we're still talking at least 4x slower than C

  • "MySQL is just fine" :

most of the listed features are buggy, slow or incomplete, and the declared feature set covers maybe 5% of the SQL standard, much of it incompatible with said standard.

  • "NoSQL is scalable" :

NoSQL is just scalable by anyone, including those who have no clue at all about databases, and is made scalable by being inconsistent.

  • "the cloud>you" :

go ahead and try to get one million IOs per second in that cloud and tell me how much it cost you; I'm willing to sell you a physical box that does that for less than half of your yearly costs

  • "your approach isn't scalable" :

don't you think pushing the limit from 1,000 concurrent users to 100,000 concurrent users on the same box is already a good step towards scalability? Don't you think it'll make the scale-out that much easier (i.e. a thousand boxes instead of a hundred thousand)?

  • "your approach is more expensive" :

fewer servers, cheaper servers, more performance per server, ... it HAS to be more expensive indeed.

  • "<professionals I know> say you're wrong" :

In a field where 99% of the people don't understand what they're doing? Picture me shocked.

  • "facebook is right, you are wrong" :

I'm not telling you blue isn't the best color, I'm telling you a 30mpg car will go three times as far as a 10mpg car on the same fuel.

It's logic and mathematics; it doesn't depend on points of view, feelings or past experiences.
The only thing one could discuss is the actual cost of the performance improvement, but even rewriting all of Facebook in pure optimized ASM would cost less than 10% of Facebook's yearly server costs.

jQuery live is dead

the new live

So here's how I reintroduced $.live to my jQuery.
Unfortunately, .on(action, selector, func) does not seem to understand anything but the most basic selectors, which makes it much, much worse than the old live, which could handle

$('.module_container .core_data th.secondary').live('click',function(){});
This, like most complex selectors, now seems impossible to .live() (or maybe my version of jQuery is full of bugs; it's 1.10.1 or whatever).
(function ($) {
    // Reintroduce $.fn.live by delegating from the document,
    // reusing the selector the jQuery object was built from.
    // Note: this relies on this.selector, which newer jQuery
    // versions no longer provide, so this shim targets 1.x.
    $.fn.live = function (action, func) {
        $(document).on(action, this.selector, func);
        return this;
    };
}(jQuery));



live vs on

You should still try and learn .on, which lets you put the delegation on any element rather than the document itself.
If your element has no static parent elements apart from body/html (i.e. it is dynamically added to the body or html), live is the correct approach (and having JavaScript create 99% of your HTML is the correct approach to the modern web application).
Otherwise, you should be using .on in order to make event handling lighter; the lookup for the handler can become slow if you add a zillion handlers on the same element, which is what happens when you use .live for everything.




event delegation is slow

The reason live and on are slow is that when an event is delegated with the jQuery method, an event handler is created on the document (or your selector) that checks whether or not the bubbled event target matches the selector.
So what's actually slow is the selector matching itself, and the very long list of if (match(ev.target, selector)) { callback1234(); }, which is bad design whether you use "on" or "live".




the solution

If you still hit performance bottlenecks related to handler lookup, you should consider dynamically adding .on handlers to children elements (delegated event delegation), or go all the way and dynamically add handlers to the elements you create (no event delegation).
It mostly depends on the percentage of events that are actually triggered and the number of different events that bubble up to the same delegated element, but profiling will give you your answer.
Note that $(document).on and live both get you your handlers in every case; $(anything else).on will fail if it's not run after load, i.e. packaged in a function and called on purpose in .ready().



the best solution

Drop jQuery. It takes 96KB to do mostly nothing. Write your own event delegation and enjoy a freedom that far surpasses both what .on() and .live() ever did.
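Here is a minimal sketch of hand-rolled delegation (all names are mine, not from any library): one listener per event type, plus a map from class name to handler, so the lookup is a hash access instead of a long chain of selector matches. The browser wiring is shown as a comment; the dispatch logic itself is plain JavaScript:

```javascript
// One listener per event type on the document, and a map from
// class name to handler: lookup is a hash access, not a selector scan.
function createDelegator() {
    var handlers = {}; // event type -> { className: handler }

    function dispatch(type, target) {
        var byClass = handlers[type];
        if (!byClass) return null;
        var classes = (target.className || '').split(/\s+/);
        for (var i = 0; i < classes.length; i++) {
            if (byClass[classes[i]]) return byClass[classes[i]](target);
        }
        return null;
    }

    return {
        on: function (type, className, fn) {
            (handlers[type] = handlers[type] || {})[className] = fn;
        },
        dispatch: dispatch
        // In the browser, wire each event type up exactly once:
        // document.addEventListener(type, function (e) {
        //     dispatch(type, e.target);
        // });
    };
}
```

This only matches on a single class name, which is exactly the point: if you control your markup, you rarely need more than that.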

Learn dynamic event delegation delegation ;) (it's called eventception)

Reinventing the Wheel

Most of my very short career was spent reinventing the wheel (alright, I'm cheating, I've been doing that for longer).

And that kicks ass, because today I'm among the best wheel inventors and builders, and I never have to drive a car with octagonal wheels.

So while you spend your time fixing your car due to shock damage, I'm just driving, and driving, and driving.


In other words, don't be so afraid to check the wheel, figure out that it's broken, and fix it. In the long run (that is, as short as my career), it's worth it.

Wednesday, October 16, 2013

The Retina Bias

It seems most designers have successfully migrated to retina MBP.

Unfortunately, some of them have not yet realized that most people don't want to use that computer to surf the web, be it because it runs the crappiest OS of all time, because 2880*1800 is not a logical resolution choice, or because it's just very expensive for no good reason.

Here's one example (most websites are like that) of a startup website that suffers from "The Retina Bias":

In Retina resolution:

It's very empty, but then so are most websites on any big screen (from full HD on).




In full HD (the standard for us real tech guys).
You can still see the phone, but the reflection is dead and so is the rest of the page content.






In 1366*768 (the poor man's laptop, pretty much standard as of today). You can guess there's a phone but you have zero idea of the application layout or the rest of the page content.





There are other standard resolutions in between, and all of them will result in a more or less useless combination of wasted space and lots of scrolling. Scrolling is bad. Scrolling is ugly and annoying; scrolling hides what you're supposed to be showing to your visitors. If you absolutely have to scroll, scroll one div, not the whole page. Be sure though that every time you make someone scroll, a kitten dies.



What you should be doing instead



Basically what I do: avoid scrolling at all costs, try and use more of the page, think about standard resolutions and take the time to make a beautiful design (not like me).

In Retina resolution: Retina resolution is a bad idea (4K will be another discussion); until then, you can upscale everything with 10 lines of JavaScript if you want to take it into account (change the body font-size) and it will simply look like full HD with more pixels.
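A hypothetical sketch of that "10 lines of JavaScript" idea (the 1.5 cutoff is my own assumption, and this only helps if the layout is em/rem-based):

```javascript
// Compute a body font-size percentage from the device pixel ratio,
// so an em/rem-based layout scales up on high-density screens.
// The 1.5 cutoff is an arbitrary choice for this sketch.
function hiDpiFontSizePercent(devicePixelRatio) {
    var scale = devicePixelRatio > 1.5 ? devicePixelRatio / 1.5 : 1;
    return Math.round(100 * scale) + '%';
}

// In the browser:
// document.body.style.fontSize = hiDpiFontSizePercent(window.devicePixelRatio);
```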




In full HD



In 1366*768







In half full HD (windows+left)


On S3 > android > chrome


Sure, this website is ugly, but I'm not the guy who's supposed to have worked on his drawing skills; that's the designer's job.
Just like it should be the designer's job to make webpage content usable for people without scrolling and without trouble.

In every example I used Chrome without any toolbar, i.e. the thinnest chrome you can ever have and the best screen use possible aside from full-screen, which no one uses.

The negative effects of bad webpage sizing are even more visible when you have toolbars, favorite bars, not-chrome or an Apple OS that thinks you shouldn't ever maximize your browser window unless you want to do that manually.

So please all of you MBP-retina-biased designers, design stuff that the world can use, not just people who bought an MBP retina and an iPhone7.

Fight vertical scrolling, kill page scrolling and please don't kill our bandwidth with retina size graphics.

If you want to think Retina, use svg, not huge static files.

Monday, September 30, 2013

Warranty in Europe (and the world): Don't listen to the seller

Today I went to Aldi to get two broken items refunded (both broke in a few months and both of course benefit from the legal two-year warranty).

I got told that:

  1. After one month, Aldi doesn't take care of it anymore and you should call the manufacturer
  2. Without the receipt there will be no refund
I found that rather unlikely to be legal and did a little research...

It so happens that:
  1. The warranty is always an obligation of the seller
  2. The legal warranty exists even if the receipt is gone, and any way to prove the purchase origin should be accepted in court.

So once again, don't let them tell you you can't.

You CAN send back anything you don't like or don't want anymore within 7 days of delivery if you bought it online (and you should buy online because that warranty doesn't exist with brick&mortar shops).

You ALWAYS have a two-year warranty on new products and one-year on used products when you buy from a company (very important when talking used cars with any dealership, don't accept any less without a big discount).

That warranty is always that company's responsibility, and yours stops at bringing the item back to them (there is a gray area for online purchases, so pick sellers with free returns).



So about that Aldi thing...
1. After two attempts, I got one of the senior staff of the Aldi shop in Rixensart to request a copy of the receipt.
2. After a month or so, I got the receipt! I then brought it to the shop, which told me they weren't allowed to do anything about it.
3. I then contacted their customer service, which started asking me for details about the purchase.
4. I reminded them of the law and kindly asked them to respect it.
5. I got ignored.
6. I sent an email, following their return procedure
7. I got ignored.
8. I trashed the stuff and made a note to work harder so that I wouldn't ever consider buying stuff from Aldi ever again.

Monday, July 8, 2013

The meaning of life (42)

The question was: what is the meaning of life, the universe and everything.

The meaning of life for humans is procreation; we exist because our parents procreated.

To procreate is to multiply, the symbol is *.

And the ASCII code for that symbol is 42.
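You can check the arithmetic in any language:

```javascript
// '*' sits at code point 42 in ASCII (and Unicode).
console.log('*'.charCodeAt(0)); // 42
```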


Douglas Adams stated he picked the number at random, so I'll just consider my idea an elegant invention.

Sunday, June 30, 2013

Google HTML /CSS Style Guide Review

HTML


  • Omit the protocol from embedded resources.
  • Indent by 2 spaces at a time.
  • Use only lowercase.
  • Remove trailing white spaces.
  • Use UTF-8 (no BOM).
  • Explain code as needed, where possible.
  • Mark todos and action items with TODO.
  • Use HTML5.
  • Use valid HTML where possible.
  • Use HTML according to its purpose.
  • Provide alternative contents for multimedia.
  • Separate structure from presentation from behavior.
  • Do not use entity references.
  • Omit optional tags (optional).
  • Omit type attributes for style sheets and scripts.
  • Use a new line for every block, list, or table element, and indent every such child element.
  • When quoting attribute values, use double quotation marks.

In order to remain consistent across all languages and platforms, 4-col tabs should be used.

TODOs have no place in the markup. Task management is a good thing; do it elsewhere.

Since HTML is bound to be semantically incomplete, a line must be drawn. I draw it after div,p,li,span,a,h1.

Omitting optional tags is murder. One does not simply self-close tags. (optional tags are inconsistent, break XML compatibility and thus many HTML parsing tools, just to save a few characters)

Consistency > all, and if you're going to pick only one quotation style, pick the same for CSS and HTML.

CSS


  • Use valid CSS where possible.
  • Use meaningful or generic ID and class names.
  • Use ID and class names that are as short as possible but as long as necessary.
  • Avoid qualifying ID and class names with type selectors.
  • Use shorthand properties where possible.
  • Omit unit specification after “0” values.
  • Omit leading “0”s in values.
  • Use 3 character hexadecimal notation where possible.
  • Prefix selectors with an application-specific prefix (optional).
  • Separate words in ID and class names by a hyphen.
  • Avoid user agent detection as well as CSS “hacks”—try a different approach first.
  • Alphabetize declarations.
  • Indent all block content.
  • Use a semicolon after every declaration.
  • Use a space after a property name’s colon.
  • Use a space between the last selector and the declaration block.
  • Separate selectors and declarations by new lines.
  • Separate rules by new lines.
  • Use single quotation marks for attribute selectors and property values.
  • Group sections by a section comment (optional).


The only standard that works across everything, from SQL to Shell to any language, is underscores. Since technology is mostly about linking technologies, incompatibility for no reason is unacceptable.

I'll use spaces when I want to. Useless spacing left and right serves no purpose, unless you're using a broken font to edit code with.

Then back to the single/double quotes discussion.


Conclusion


Overall, the Google style guide for HTML/CSS is not half-bad, but it makes critical mistakes on the consistency side (underscores, quotation marks and indentation).


I'll finish with their own:

Parting Words

Be consistent.

If you're writing a coding style guideline, focus on consistency above everything else.

Friday, June 28, 2013

REST (for the web), explained and reviewed

REST is about using HTTP features when they exist and serve your purpose.

URI


Don't: 
/api/edit/post/3
/api/read/post/3
/api/users/frank/post/3

Do:
/api/post/3


HTTP Methods


Don't:
/api/create/post/
/api/edit/post/3
/api/read/post/3
/api/delete/post/3

Do:
HTTP POST /api/post/
HTTP PUT /api/post/3
HTTP GET /api/post/3
HTTP DELETE /api/post/3
(POST on the collection creates; PUT updates an existing post.)
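To make the mapping concrete, here is a hypothetical sketch of dispatching on method + URI instead of a verb in the URL (the ":id" route syntax and all names are invented for illustration):

```javascript
// Dispatch on HTTP method + resource URI instead of a verb in the URL.
function createRouter() {
    var routes = {}; // "METHOD /pattern" -> handler

    return {
        add: function (method, pattern, handler) {
            routes[method + ' ' + pattern] = handler;
        },
        dispatch: function (method, path) {
            var keys = Object.keys(routes);
            for (var i = 0; i < keys.length; i++) {
                var parts = keys[i].split(' ');
                if (parts[0] !== method) continue;
                // turn "/api/post/:id" into a capturing regex
                var re = new RegExp('^' + parts[1].replace(/:[^\/]+/g, '([^/]+)') + '$');
                var m = path.match(re);
                if (m) return routes[keys[i]].apply(null, m.slice(1));
            }
            return null; // no route: a 404/405 in a real server
        }
    };
}
```

The same URI handled by four methods replaces four verb-URLs.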

Languages


We want to keep the URI unique, so:

Don't:
/api/en/post/3

Do:
Use the language header in your GET request:
Accept-Language: da, en-gb;q=0.8, en;q=0.7
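On the server side, honoring that header means ordering the languages by q value. A simplified sketch (function name is mine; no wildcard handling, assumes well-formed input):

```javascript
// Order the languages in an Accept-Language header by q value.
function parseAcceptLanguage(header) {
    return header.split(',').map(function (part) {
        var pieces = part.trim().split(';');
        var q = 1.0; // default quality per the HTTP spec
        if (pieces[1] && pieces[1].trim().indexOf('q=') === 0) {
            q = parseFloat(pieces[1].trim().split('=')[1]);
        }
        return { lang: pieces[0], q: q };
    }).sort(function (a, b) { return b.q - a.q; });
}
```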

Hypermedia (HATEOAS)


The idea here is that you want to specify all possible actions within the HTTP answer, as Hypertext, rather than have the application generate them based on information external to the request.

Don't:
Add a Save button because you just created an edit form.

Do:
Add a Save button because GET: /api/post/3 contained the information for that, e.g. (in XML for clarity):
<action link='http://mysite.com/api/post/3' method='POST' name='Save' />
This is still unclear to me (and it would seem, to everyone who blogs about it), but it may be that the only practical way to implement a so-called "Hypermedia API" is just a plain old website, with API structure and URLs.
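A hypothetical sketch of the client side of that idea, using JSON instead of the XML above (the field names actions/name/method/link are invented): the UI builds its buttons from the actions the response declares, not from hard-coded knowledge.

```javascript
// Build the UI's available actions from the response itself.
function actionsFromResponse(resource) {
    return (resource.actions || []).map(function (a) {
        return { label: a.name, method: a.method, url: a.link };
    });
}

// A JSON equivalent of the XML <action> element above:
var post = {
    id: 3,
    actions: [{ name: 'Save', method: 'POST', link: 'http://mysite.com/api/post/3' }]
};
```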


Representation


The URI is unique, but the representations are many.
You should specify the representation using Accept headers like so:
Accept: application/json
for standard application requests,
Accept: text/html
for basic/crawler requests, or even
Accept: application/xml
if you have customers with an XML security filter.
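Server-side, that negotiation can be sketched as picking the first type from the Accept header that you support (function name is mine; simplified: no q values, no wildcards):

```javascript
// Pick the first representation from the Accept header that we support.
function pickRepresentation(acceptHeader, supported) {
    var wanted = acceptHeader.split(',').map(function (s) {
        return s.trim().split(';')[0];
    });
    for (var i = 0; i < wanted.length; i++) {
        if (supported.indexOf(wanted[i]) !== -1) return wanted[i];
    }
    return supported[0]; // fall back to a default representation
}
```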

Self-Description


The vision is that you should be able to get all the relevant information for API use from the API itself, and the proposed solution is to use media-types for that (application/myapplication+json or application/json;app=myapplication).

I haven't found anything relevant on this topic so far, meaning that while some people have partial approaches to a solution, nobody seems to have a standardized, clean and simple way to do media types.

In other words, create your own media-types, stay tuned for news, keep it simple, make sure your own application gets the meta-information from the HTTP response, rather than from your code.


Architecture Constraints


Captain Obvious to the rescue:

Client/Server: client and server have separate concerns; the server only acts as a server.
Stateless: the server stores no client state (things that can make the application behave differently, outside of session id and authorizations).
Cacheable: do set your cache headers.
Layered System: whatever the infrastructure, the client cannot tell the difference and sees only an endpoint.
Code on demand: you can include .js files in your /js directory!
Uniform Interface: keep it standard and simple thanks to the points above.




Ok so now you know what REST is. Time to review the concept:


The good


One of the objectives is to make web services as straightforward and self-descriptive as possible, using existing tools where they fit the bill.

Self-Descriptive services are awesome, and using media types for that sounds like a good idea.

The use of Accept headers for language and type concerns sounds better than /en/, although that means changing languages should be made easy (I often edit URLs because the change-language mechanism is either a dozen clicks away or hard to find).


The bad


While there are enough HTTP methods to handle CRUD, that is not going to be enough for more advanced cases, as e.g. a "report" item, that requires CRUD + Run, making the use of HTTP methods as API methods  a source of inconsistency.

Self-Descriptive services are awesome, but the REST architecture does not specify a standard for this, leaving everyone to do different incompatible things.

Because an API is a tunnel for a remote application to call local functions, the approach of Unique Resource Identification through URLs (i.e. /item/id/method)  is a leaky abstraction.

One of the problems is that it results in an Object Oriented API with a much wider logical map, meaning that you will have a hundred (virtual or not; same problem for clients) create-relation methods like /object/id/relate/object2/id2, /object/id/relate/object3/id3, etc. instead of a single method like /relate/object/object2/id/id2.

Another problem is that it implies inconsistency, as non-resource methods (such as login, or metadata: relations, items, rights) won't fit in such a scheme anyway, a bit like Java needing a container class to define a function.

It could work if you're only writing APIs with basic public resource access, but that isn't generic in any way.

In order to remain consistent, it is thus better to simply avoid the OO/REST API style altogether (unless you still believe in OO).


My Conclusion


REST has some good points and those are likely to be part of any good API architecture.

However, there are too many bad points to consider REST a valid API architecture, and only one practically good point; Hypermedia may or may not be another, depending on how the alternatives fare.

HATEOAS is a curious idea: on the one hand it means you need two representations for your API (you don't want the HREF overhead in your application), but on the other hand that secondary text/html hypermedia representation could serve API documentation purposes and maybe even crawling purposes.

It seems to me like a Hypermedia API simply means providing an html1.0 presentation for your API with  human-readable documentation associated with its text/html media-types.

It may have value but then they should've just called it that instead of adding more buzzwords.




Wednesday, June 19, 2013

Invisible MMO cell boundaries

Imagine you're flying a space-capable plane, fighting enemies on the ground. (that's obviously one cell, managed by one server).

Then another aircraft starts chasing you, and you decide to turn this into a space battle, knowing your faction has a much better advantage up there than down there (they moved a space battle station above the position they were attacking, but the ground forces of the enemy are still alive and kicking, for example). (Space will obviously be in another cell, because it's so incredibly far away and contains its own lot of objects and stuff.)


Now in most games you'd climb and climb, and see a loading screen, then climb again.

That would make no sense at all from the perception of the aircraft in pursuit, and would break a lot of the immersion.


We need a solution without any loading screen or visible server switching to make this just awesome (imagine an aircraft chase from the ground into space, a space battle, and then the chase reverses as the hunter becomes the hunted, and flees down to the ground -- no interruptions, no loading screens, only amazing immersion).


For the server side, I think the solution would be slightly overlapping cells, and a silent connection to the next cell's server (servers, actually: near the corner of a cube you can overlap up to 8 cells) at that point, tracking the aircraft and its projectiles in both grids.

The client would have a proper solution for dynamic loading of resources (a bit like a moving-window concept) and would reconcile the inputs from multiple servers, discarding duplicate input (when four cells track the same missile, only one collision is recorded and the rest safely discarded) and making a seamless transition between two cells. The result is effectively infinite immersion: no 1000m flight ceiling, no "go back to the combat zone", no crap.
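The duplicate-discarding step can be sketched as a reconciler keyed on a per-event id (the id scheme is an assumption; any globally unique event id reported by every overlapping cell would work):

```javascript
// Discard duplicate events reported by overlapping cell servers,
// keyed on a globally unique event id.
function createReconciler() {
    var seen = {};
    return function accept(event) {
        if (seen[event.id]) return false; // already handled via another cell
        seen[event.id] = true;
        return true;
    };
}
```

Four cells can then report the same missile collision and the client applies it exactly once.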

If you want to take the 60 miles detour to attack from a better angle, suit yourself.
If you want to go from the planet to its moon in your little aircraft, suit yourself.

Friday, June 14, 2013

Code indentation

4-column tab




Consistency > your feelings about code indentation.





Spaces will never be consistent and require a lot of text processing on commits, reads, etc., with no added benefit.

Tabs are semantically correct: one tab indents, where some number of spaces could mean anything.

When you set your own column width preference for tabs, the view is altered. This is logically correct.

When your editor transforms four-space-indents into 2-space-indents, the content is altered. This is incorrect.


8-cols are obviously a joke, as one can only imagine the result of 8-col tabs, 32-col TTY and 4 levels of indentation.

2-cols are rather unclear considering how people like adding "one space" everywhere for "clarity".

3-cols is not a power of 2, so it doesn't work.

4-cols is the best answer, it's both clear, not too wide and a power of 2.

Wednesday, June 12, 2013

Code Avengers Review

HTML track

level one

This is just horribly painfully slow, i.e. exactly what a real beginner will find welcoming and useful.

More than that, there's a focus on realistic training, making you check for common mistakes you (and I) make when writing HTML, and really going step by step.

If anything, I would've found it too slow when I was a beginner, but then that always was my reaction to any kind of teaching, so I think it must be just fine as is.

Then it introduces you to Google Fonts, which is just awesome.

Then I even learned about stuff I had forgotten (text-shadow and multiple shadows) and never cared about (lists, header, footer) and probably never will use.

I must say my biggest surprise is that not a word was said about divs, and divs are about the only thing I ever use in html5 nowadays - but then they say that's for the level two course - let's see if they provide it for free to a reviewer ...


And then, some things went wrong, too:



JavaScript track

level one

Slow, but not as slow as the HTML one, I suspect they expect people to do HTML before JS, so that's only fair.

Very smart pedagogy, pushing you far into the problem to make you want the solution, long tutorial, with reviews and everything, really thorough and yet not annoying (except the "game" pauses... who wants to shoot dem satellites...).

Much more fluidity than Codecademy as well, going from one task to the next is much less disruptive, there are no interruptions between "topics" either, really immersive and enjoyable.

Excellent pedagogy again by not holding the user's hand while remaining clear and simple.

Also, they make you test every code path EVERY time, a bit annoying but really good practice, until they give you unit tests that largely speed up the process (probably intentional, to make you want unit tests).

There's a strong focus on results, letting you use whatever means you feel fit to achieve the results, very flexible (actually what I expected from Codecademy).

And then, some things went wrong, too:

  • 19.5: once again someone insists on using stupid broken two-space tabs when the whole world has only one standard of four-space tabs. Dear broken style standards, please die and let us have consistency at last.



Conclusion


This can't be final until I review their other courses, but right now, compared to Codecademy, they have a far uglier interface for HTML and an ok-ish one for JavaScript, with much better pedagogy and course quality. I would not hesitate to recommend them to newbies for the HTML and JavaScript level one courses, with a note to quickly forget the Google style guidelines, which were probably written by a team of drunken monkeys.

And while the interface may be a little less sleek, the pace, interaction and immersion are far better. You can literally sit at your computer for several hours, just in the zone, learning the stuff.

The duration is fair: it took me a good two hours (while writing this review) for HTML 1 and four hours for JS 1, against the 10 and 20 hours they advertise for newbies - so really fair estimates, probably 10 to 30% above average for newcomers.

All in all, if you're going to learn JS or HTML, go to www.codeavengers.com; it's simple and a great start, at least for the first free level (didn't get a review token for the rest yet).

So @codeavengers, great job guys, time to expand - and please let that broken Google web style die, it's really not in the same class of quality as your teaching service.

Tuesday, June 11, 2013

Codecademy review

Now that I have also reviewed CodeAvengers, let's just not waste your time: If you're looking at HTML / CSS / JavaScript training, go there right away, and maybe come back to Codecademy afterwards.

In order for you to understand this review, let me simply note that prior to trying Codecademy, I was an expert in PHP, and good enough in HTML/CSS/JS to write a web GUI framework that would destroy Sencha's and Google's libs (disclaimer: I do have more important stuff to do).

I took two days to do all the language paths on Codecademy and a few API ones (somehow, most of the APIs are from services I had never heard of) to really understand the potential value of their approach (I love education and code).

About those paths:

  • PHP : Don't do this. This path is utter trash: it's wrong in many ways, covers mostly nothing, and ends with basic OO crap instead of teaching you how to code.
  • HTML : It may be used for a first contact with HTML. It's very shallow (much shallower than HTML itself) and full of unrealistic and useless examples.
  • CSS : Basically a good start, but really incomplete. Shallow as well, yet it ain't half bad; it's focused on misguided design approaches though - nothing about responsive design or how em, %, and absolute units can be your friends.
  • JavaScript : A good first introduction as long as you know where they went wrong; otherwise it's potentially dangerous. This one is a bit better, very shallow as well, but since this is programming, they've added a bit of bad practice to it, just to keep things fresh.
  • Python : A nice first introduction, with a long focus on irrelevant things (who the fuck drops half of the items from their array unless they're going threading?), but nice anyway. Unfortunately, lots and lots of bad coding practice, bad examples, no user input handling, etc.
  • Ruby : Like Python, but better. Still some bad coding practice and no input handling, but the topics covered are a bit more relevant.
  • APIs : This part is murder: there's the Twitter API, which requires you to log in (that's bad), the YouTube API, which doesn't and is infinitely simple, and then a bunch of useless APIs that no one is ever going to use in their whole programming life - and literally NOTHING about the Facebook or LinkedIn APIs, which are the most likely to be used.



This was fun, I learned a lot, and the process was somewhat efficient, even though probably not as much as me rushing for two days in one direction (although that would be specialization and not introduction, and I've had major successes in self-teaching without any platforms).


BUT


  • way too much OO (I know the no-skills are going to disagree, but OO is a philosophy and shouldn't be part of teaching code, certainly not when teaching innocent noobs who couldn't possibly know better)
  • no serious languages (C, Lisp, Fortran, ASM, C++)
  • it is obvious that most of the examples are written by people who code far worse than I do and don't have great teaching skills either
  • the only refactoring topic actually brought a perfectly readable piece of code down to ok-ish length, and then went into one-liner territory for no reason (if you do prefer code density, just use APL; at least it doesn't have an alternative long syntax)
  • many courses have way too many steps, some of which are full of nothing, and there's no shortcut or hotkey to skip a useless step
  • most of those steps have very bad unit tests that either allow wrong answers OR, far worse, refuse much better answers
  • most steps are really about hand-holding, and that's without checking the tips, which I'm sure do the rest of the work for you. That makes progress much easier but mostly irrelevant, as nobody retains information they didn't have to search for (e.g. instead of saying puts all the time, they should say it once, then make you remember it by telling you to "write" to the console rather than print or puts or echo every time)
  • I almost forgot, but their JS crashes, often, and you'll have to F5 that tab to get out of it. I even got a blue screen (I normally never do) using Windows 7, Chrome and Codecademy - WTF.
  • Some of the obscure service APIs (especially Gilt) have HORRIBLE, horrible, disgusting, crappy tutorials, once more hitting the nail on the head about user-generated crap
  • Let's face it, if I could do the whole thing in two days, it probably lacks depth and value
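The refactoring gripe is easy to illustrate with a made-up JavaScript example (mine, not Codecademy's): the readable version states its intent, while the "refactored" one-liner computes the same result and hides it.

```javascript
// Readable version: sum the squares of the even numbers in a list.
function sumEvenSquares(nums) {
  var total = 0;
  for (var i = 0; i < nums.length; i++) {
    if (nums[i] % 2 === 0) {
      total += nums[i] * nums[i];
    }
  }
  return total;
}

// One-liner "refactoring": same result, far harder to scan at a glance.
var sumEvenSquares1 = function (ns) { return ns.filter(function (n) { return n % 2 === 0; }).reduce(function (a, n) { return a + n * n; }, 0); };

console.log(sumEvenSquares([1, 2, 3, 4]));  // → 20
console.log(sumEvenSquares1([1, 2, 3, 4])); // → 20
```

Compressing working, readable code for its own sake teaches beginners exactly the wrong reflex.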

In conclusion



It seems Codecademy is waiting for user-generated content (their API tutorial page shows this, with lots of unknown services' APIs using that page as ad space), but I can't see that working - users are (everyone is) mostly average, while ideally you'd want tutoring material written by the best shrinks based on content provided by the best coders.

I agree that would be far above university standards, mostly because of the better psychology, but then that's a vision worth shooting for.

Given their stage and their funding, the low quality of the courses - both in code quality and educational quality - makes it feel as if they invested everything in marketing, and that's just sad. I think it's time for them to upgrade their tutorials and expand far beyond their limited scope and quality.

Right now, I would recommend Codecademy only for people who will never really have to code but need to have an idea of how it can work.

If you intend to be a programmer, you're far better off without any tutorial, just asking questions and questioning the answers while you achieve real-world goals.

Tuesday, May 21, 2013

The Myth of the Century: Overpopulation

Overpopulation is a generally undesirable condition where an organism's numbers exceed the current carrying capacity of its habitat (Wikipedia - as good a definition as any other, really).

As long as humans are able to remodel their habitat to increase its carrying capacity, there will not be any human overpopulation on Earth.

The surface of the Earth never limited population density (Hong Kong is listed as high as 20,000 people per square kilometer, depending on whether roads, airports and the rest are taken into account), yet somehow it is expected to limit food production density.


How much mental energy does it take to go from a 3D human habitat (skyscrapers) to a 3D vegetal habitat (underground farming)?



The only problems we ever faced and will ever face are lack of vision, ignorance and mismanagement.



We are a race of unlimited potential in an unlimited universe, and nothing except our own stupidity can stop us - it does seem to do a good job of it, though.

Thursday, March 21, 2013

How to remove LinkedIn Today

1. Get Adblock
2. Right-click on the Adblock icon, then choose Options
3. Manually edit your filters and add

www.linkedin.com##LI[id="today-news-wrapper"]

Click save, refresh, and enjoy a LinkedIn without the spam. Have a nice day!


My LinkedIn filters:
www.linkedin.com##LI[id="today-news-wrapper"]
www.linkedin.com##DIV[id="extra"]
www.linkedin.com##A[class="nav-link"][href="/mnyfe/subscriptionv2?displayProducts=&trk=nav_responsive_sub_nav_upgrade"]

They remove everything on the right, LinkedIn Today, and the upgrade button, which is useless anyway.
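For what it's worth, Adblock's element-hiding syntax also has an ID shorthand (domain###id), so the first two filters can be written more compactly - assuming LinkedIn still uses those ids:

www.linkedin.com###today-news-wrapper
www.linkedin.com###extra

Both forms hide the same elements; the attribute-selector form just additionally pins the tag name.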

Wednesday, February 27, 2013

Passive Cooling Datacenter

In the last decade, datacenters have increased in power density so much that cooling has become a business concern in its own right.

So far the conclusions are that watercooling is not really practical outside of experimental systems (such as the K supercomputer), air cooling tends to have high running costs, and oil is a mess.

My design, inspired by heatpipe coolers, aims to move heat using standard heatpipes from the chips to the chassis, and then dissipate the chassis heat through a cold wall on both sides of the rack. The cold wall is just a rack-wall-sized waterblock with bottom connectors for leak protection.

The result is passive cooling at chassis level (except for a light airflow to keep the rest of the board cool) and extremely cheap, standard-scale watercooling.

The hot water from the rack gets cooled by another cold wall, which sits in an artificial river located just below the compute floor, with a slight slope and a single set of redundant pumps to keep the river flowing.

It may be possible to further improve efficiency by using huge heatpipes instead of cold walls, but that would position the "artificial river" above the racks…

So far, I've gotten my hands on some very nice heatpipes, but haven't begun testing the cold wall part.
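As a back-of-the-envelope sanity check on the water side (the numbers are my own assumptions: a 20 kW rack and a 10 °C coolant temperature rise; water's specific heat is about 4186 J/(kg·K)):

```javascript
// Required coolant mass flow: mDot = P / (cp * deltaT)
var rackPowerW = 20000; // assumed 20 kW rack heat load
var cpWater = 4186;     // J/(kg*K), specific heat of water
var deltaT = 10;        // assumed 10 K rise across the cold wall

var massFlow = rackPowerW / (cpWater * deltaT); // kg/s
console.log(massFlow.toFixed(2) + " kg/s"); // → 0.48 kg/s
```

So even a fairly hot rack needs only about half a liter of water per second through its cold wall - a flow a slow artificial river with a single set of redundant pumps can supply easily.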