Sunday, June 4, 2017

PayPal has given up on buyer protection

When PayPal started, it was clear that buyers were 100% protected no matter what happened, and I loved it.

Unfortunately, some less scrupulous people started abusing that system to scam sellers out of their products.

And so, PayPal adopted new policies in an attempt to stop sellers from being screwed over.

The result is very simple: there is no buyer protection anymore.
Instead, PayPal guarantees to unscrupulous sellers that as long as they have a proof of delivery, they'll keep the money, no matter what they actually ship.


Now, when someone on eBay sends you the wrong item or even just random crap, PayPal insists that you spend your personal time and money returning the item to the seller before any refund takes place.

That seems to make sense, until you try to apply it to all the really cheap items we tend to use eBay for: $1-20 items that may come from Hong Kong or another faraway country, and generally cannot be shipped back for less than half the price of the item.

Add to that the time that it takes to properly repackage the item and then drop it off at the post office, and the customer is basically screwed in every case.

In other words, there is no real buyer protection anymore and people should probably stop using PayPal for any small transactions, given their support of scammers.


I guess full buyer protection was too good to be true; it's sad to see PayPal give up on it, though.  I guess I'll use Amazon more and more, until they too decide that customers should accept being screwed over by sellers.

Friday, January 31, 2014

If you need a CTO, clap your hands

A great many startups are looking for CTOs, and I've seen quite a few stuck for several months without progress because of that, so I thought I'd share a little something to help them.

Why you really need a CTO


Oftentimes, people ask me to be their CTO thinking I'll be coding for free for some months before I can confirm that they can execute on the business end.

Besides being a not-very-interesting proposition, what terrifies me is the complete disregard it shows for their startup's real needs.

What's missing in a startup that has no CTO is not a free coder, it's the insight required to provide the business with:
  • feasibility assessments
  • cost estimates for different plans
  • insights into technology to uncover better options
  • the best technology choice for the business objective
  • skill assessments to select the best coders
  • IT project management to ensure the execution of the plans
  • and so much more.
Many startups that picked their CTO as "just a free coder" have the wrong CTO when it comes to the above points, which are critical factors in terms of product quality.

Additionally, skilled professionals such as myself are not really interested in working for cheap on somebody else's idea, not only because the current market values our skills quite a bit, but also because every creator out there has a lot of ideas of his own.


So what's the plan?


First, you need that CTO-advisor, that guy who's going to give you a chance at going in the right direction product-wise.
You will find that a lot of highly skilled professionals will be more interested in the real CTO role than in the free-coder proposition, and it will be easy to negotiate that for less equity than the old-style CTO-free-coder package.

Second, you must accept that the product you're going to base your whole business on may cost you some cash, just like a lot of your marketing will.

I usually price top-quality v1.0's between €15K and €25K, which I bundle with a "Backend as a Service" proposition that lets you run your startup without tech guys until it starts making money and further versions can be considered.

It may sound expensive, but it's actually really cheap, and you most likely have 15% equity that you can find investors for; they will have the conviction that you can at least deliver your product, given the track record of the company you've contracted to build it (me, for example).

It's also important to reserve a bit more cash than that, as you'll inevitably order "not exactly what you need", and you must be able to get a v1.1 out before you run out of budget.



Either way, good luck.

Wednesday, December 4, 2013

Moo.com review

Offering

Moo only offers a few different options for business cards, and that's fine because most people don't want wooden, plastic or metal cards.

A very good thing about Moo is that they let you get started with only 50 cards, which definitely helps when you're still iterating on your design or want to try out a company.

Another awesome feature, which I did not use, is that they allow you to print different backsides within a single batch of business cards.

This enables you to order cards for 5 (or 50) different people in one batch, or to add some variety to your cards, like showing off your designs if you're a designer.

Customer Service

So far, just fine, acceptable delays, very polite and forthcoming.

I expected them to deliver on the reprinting of my cards ...

Well, they messed up. The reprint is much better, but some cards still have problems (about 10%), and the shipping was one day late, plus a one-day weekend delay, i.e. I got my cards after the event was over.


Shipping Delays

They were one day late, apparently due to a printer malfunction. I'd have appreciated being told before I had to ask, but I got an answer fast, and the cards as well.


Printing Quality - first try

The 4 paper layers stick together fine; the edges and corners are fragile, as expected, but were impeccable at delivery.

The paper is fine, the texture looks nice and it feels good to the touch, but it doesn't seem as exceptional as they make it out to be on their product page; I guess a paper expert would have much more to say about this.

The cards are very slightly warped, which is completely invisible to anyone who doesn't study your card from a distance of 10 centimeters.

The precision is great at a normal distance, but if you look closely it's not that great, comparable to standard photo printing.

It could be a limitation of the source image (they requested 300 dpi), but since I couldn't send SVG I'll never know, and it did not matter for me.

The color (I only used black ;)) is bad, as in worse than my cheap home laser printer on normal settings.

There is serious white bleeding on the edges (mostly the top) of every card, plus scratches on some.

The white background on the other side of the cards is ruined on all cards but two (the top card from each 50-stack) by black ink transferred from the backside of the card above, light smears that spoil the cards.

Printing Quality - second try

The color is still unimpressive, but there is much less bleeding on the black backgrounds, and most of the cards had a properly white background.

Nothing else changed from the first try.

Evolution

Apparently, Moo.com had many failures in the past but has been continuously improving: from unusable software and buggy printing back in 2010, to misaligned cards in 2011-2012, to my color problem today, to hopefully fixing all of that and future problems.

Still, they have a long way to go to become a 100% trustworthy printing partner.

Wednesday, November 20, 2013

Tweeting during Events: Is event promotion more important than attention?

There is a definite trend (at least in the startup world) of events that push people to tweet about them.

Startup Weekend is especially involved in that, spending several minutes every weekend to push people to tweet.

Not only that, but there are "tweet screens" distracting the audience during important pitches and other presentations.

I know social media promotion is important, but is it really worth sacrificing the quality of an event?

What do you think ?

Monday, November 18, 2013

Transcend Communication

I hate cities.

Today, cities exist because human beings need to meet face to face.

They need to meet face to face because every other means of communication is an ersatz.

A worthy goal would be to uncover what is missing from the videoconference, and create a new means of distant communication that can transmit that.

Steps


Human Research

The plan consists of a research phase, involving actors, Korsakoff patients and MRI.

By using a variety of communication means, and the MRI to analyze the patient's reaction to the actor's delivery through each of them, it should be possible to extract information about the missing communication component.

Actors are the best candidates as emitters because they're the most qualified to express the same feelings with low variance.

Korsakoff patients are the best candidates as receptors because they are the only people capable of being a blank slate every day.

Care should be taken that both emitters and receptors be shielded against parasitic information, and that any remaining parasitic state be recorded as experiment metadata (i.e. an MRI scan of both before starting).

Prototype Research

Once the perceived difference is observed and qualified/quantified, it will be time to try to understand the nature of the missing communication waves (most probably EM brain waves, but you never know).

I think the best approach to that would be to build a 2x1m "screen" that would of course output light and sound, but would also replicate the EM input.

And on the other side, you'd have a set of "cameras" capturing video, audio and EM input.

Since the missing communication component is almost certainly transmitted in a straight line in the real world, a single 2x1m "EM window" should do the trick.

Then, we would once again use the MRI to compare with real-world face-to-face communication.

Money Time

Once a two-way solution is built, this system will enable people to move out of big cities without missing out on job opportunities.
Considering the housing costs in New York or Paris, it is clear that even if the prototype were to cost $1 million, there would still be customers (a company in Silicon Valley could use one such prototype to human-link with an office in Europe where they hired three engineers for the price of one in the Valley, for example).

Friday, October 18, 2013

Performance matters

If spending a few days on improving overall performance can save you 30% of your server costs, you should probably do it.

That means:



  • getting rid of your prototyping language for something more fit for production (drop Ruby/PHP for Python, drop Java/Python for C++, or drop C++ for C)
  • reworking your data model to optimize transfers, seek times, etc. through caching and pre-computed data sets (see the sketch after this list)
  • getting rid of your prototyping datastore for something more fit for production (drop MySQL for PostgreSQL; drop NoSQL as your main data store if your model contains more than one kind of item, and keep it only when all you need is an ultra-fast K/V store)
  • getting rid of your prototyping infrastructure for something more fit for production (drop your standard EC2 instance for something optimized for your needs, do some profiling, fix your bottlenecks, use physical boxes with loads of RAM and SSDs)
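
To make the caching point concrete, here is a minimal sketch in JavaScript of serving an expensive aggregate from a cache (countFollowers is a hypothetical function standing in for the expensive query):

var cache = {};
var TTL = 60 * 1000; // serve the cached value for up to one minute

function followerCount(userId, cb) {
    var hit = cache[userId];
    if (hit && Date.now() - hit.at < TTL) return cb(hit.value); // cheap path, no DB hit
    countFollowers(userId, function (value) { // hypothetical expensive COUNT(*) query
        cache[userId] = { value: value, at: Date.now() };
        cb(value);
    });
}

The same idea extends to whole pre-computed result sets, refreshed by a background job instead of being recomputed on every request.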


For any big company, it should be simple to make that choice: you may invest a few hundred thousand in research and save 60%+ on a huge budget (like Facebook could, and has begun doing).

For a startup, it's even more relevant, since that extends your runway significantly and pushes the scaling discussion an order of magnitude or two (in number of users) further out.

Usual replies:



  • "a language fit for production will reduce productivity" : 

that's never been proven in any way. it's pure OO propaganda.

  • "ruby is ready for production" :

rails had an sql injection weakness (the kind of security hole a 1-month beginner would never leave open), and it's much slower than Java or Python

  • "Python is ready for production" :

it's still changing rapidly, has no stable version that could have been time-tested for security and performance, and we're still talking 4x slower than C at least

  • "MySQL is just fine" :

most of the listed features are buggy, slow and incomplete, and the declared feature set covers maybe 5% of the SQL standard, most of it being incompatible with said standard.

  • "NoSQL is scalable" :

noSQL is just scalable by anyone including those who have no clue at all about databases, and is made scalable by being inconsistent.

  • "the cloud>you" :

go ahead and try to get one million IO per second in that cloud and tell me how much it cost you, I'm willing to sell you a physical box that does that for less than half of your yearly costs

  • "your approach isn't scalable" :

don't you think pushing the limit from 1000 concurrent users to 100.000 concurrent users on the same box is already a good step towards scalability? don't you think it'll make the scale out that much easier (i.e. a thousand boxes instead of 100.000 ? )

  • "your approach is more expensive" :

less servers, less expensive servers, more performance per server, ... it HAS to be more expensive indeed.

  • "<professionals I know> say you're wrong" :

In a field where 99% of the people don't understand what they're doing ? picture me shocked.

  • "facebook is right, you are wrong" :

I'm not telling you blue isn't the best color, I'm telling you a 30mpg car will go three times as far as a 10mpg car on the same fuel.

It's logic and mathematics; it doesn't depend on points of view, feelings or past experiences.
The only thing one could discuss is the actual cost of the performance improvement, but even rewriting all of Facebook in pure optimized ASM would cost less than 10% of Facebook's yearly server costs.

jQuery live is dead

the new live

So here's how I reintroduced $.live to my jQuery.
Unfortunately, .on(action, selector, func) does not seem to understand anything but the most basic selectors, and that makes it much, much worse than the old live, which could handle:

$('.module_container .core_data th.secondary').live('click',function(){});
This, like most complex selectors, now appears impossible to .live() (or maybe my version of jQuery, 1.10.1 or whatever, is full of bugs).
(function ($) {
    // Shim: bring back $.fn.live on top of .on(), delegating to the document.
    // Relies on this.selector, which jQuery 1.x still populates.
    $.fn.live = function (action, func) {
        $(document).on(action, this.selector, func);
        return this; // allow chaining, like the original .live()
    };
}(jQuery));



live vs on

You should still try to learn .on, which lets you put the delegation on any element rather than the document itself.
If your element has no static parent elements apart from body/html (i.e. it is dynamically added to the body or html), live is the correct approach (and having JavaScript create 99% of your HTML is the correct approach to the modern web application).
Otherwise, you should be using .on in order to make event handling lighter: the lookup for the handler can become slow if you add a zillion handlers on the same element, which is what happens when you use .live for everything.
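
For instance (a sketch assuming a hypothetical #results container that exists in the initial HTML), delegating on that container instead of the document keeps the handler lookup short:

// delegate on the closest static ancestor instead of the document
$('#results').on('click', 'th.secondary', function () {
    // runs for clicks on any th.secondary inside #results, even ones added later
});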




event delegation is slow

The reason live and on are slow is that when an event is delegated using the jQuery method, an event handler is created on the document (or on your chosen root element) that checks whether or not the bubbled event target matches the selector.
So what's actually slow is the selector matching itself, and the very long list of if(match(ev.target,selector)){callback1234();} checks, which is bad design whether you use "on" or "live".
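
In other words, the delegation root ends up doing something like this on every bubbled event (a simplified sketch, not jQuery's actual source; handlers is a stand-in for the internal list, with one entry per .live()/.on() call):

var handlers = []; // each entry: { selector: '...', callback: function (ev) { ... } }

document.addEventListener('click', function (ev) {
    handlers.forEach(function (h) {
        var el = ev.target;
        while (el && el !== document) { // walk up the bubbling path
            if (el.matches(h.selector)) { h.callback.call(el, ev); break; }
            el = el.parentNode;
        }
    });
});

With a zillion entries in that list, every single click pays for a zillion selector matches.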




the solution

If you still hit performance bottlenecks related to handler lookup, you should consider dynamically adding .on handlers to children elements (delegated event delegation), or going all the way and dynamically adding handlers to the elements you create (no event delegation).
It mostly depends on the percentage of events that are actually triggered and the number of different events that bubble up to the same delegated element, but profiling will give you your answer.
Keep in mind that $(document).on and live both register your handlers in every case, while $(anything else).on will fail if it isn't run post-load, i.e. packaged in a function and called on purpose in .ready().
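
And here is what the no-delegation variant can look like, binding the handler when the element is created (makeRow and openDetails are hypothetical names, just for illustration):

// no event delegation: the handler is attached at creation time
function makeRow(data) {
    var row = $('<tr><th class="secondary">' + data.name + '</th></tr>');
    row.on('click', function () {
        openDetails(data); // hypothetical callback, closes over this row's data
    });
    return row;
}

No selector matching happens at event time; the cost moves to element creation instead.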



the best solution

Drop jQuery. It takes 96KB to do mostly nothing. Write your own event delegation and enjoy a freedom that far surpasses anything .on() and .live() ever did.
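
As a starting point, here is a minimal delegation helper in plain JavaScript (a sketch built on the standard addEventListener and matches APIs, nothing jQuery-specific about it):

// minimal event delegation without jQuery
function delegate(root, type, selector, handler) {
    root.addEventListener(type, function (ev) {
        var el = ev.target;
        while (el && el !== root) { // walk up from the target towards the root
            if (el.matches && el.matches(selector)) return handler.call(el, ev);
            el = el.parentNode;
        }
    });
}

// usage, equivalent to the old .live() example above:
delegate(document, 'click', '.module_container .core_data th.secondary',
    function (ev) { /* handle the click */ });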

Learn dynamic event delegation delegation ;) (it's called eventception)