A recent court ruling in Frankfurt has banned Uber’s most popular service, Uberpop (Germany’s version of UberX), from operating in Germany, claiming it “unfairly competes” with local taxi services.
When Uber faced opposition in Europe back in June, the app flew up the ranks, so we wondered whether this latest round of free publicity would have a similar effect. We took a look at the ranks over the past day, and the data confirmed our suspicion:
It seems that even negative publicity yields some positive results for Uber.
This past Tuesday, Instagram released Hyperlapse, an app that converts videos you take on your smartphone into smooth, sped-up time-lapses. With reports of similar software being developed by Microsoft under the same name, aimed at GoPro footage, we wanted to take a closer look. It turns out that Microsoft and Instagram are not the only two companies to use the name “Hyperlapse”.
An app of the same name, released by Hieronymus Belt a year earlier, held the coveted first search result position for the first few days, while “Hyperlapse from Instagram”, billed as such, sat at #2. We wondered how many people, eager to take Instagram’s new product for a spin, may have accidentally downloaded the wrong app in their haste. So we took a look at the charts for this app, which does not convert your own videos but rather uses time-lapse footage to take you on a virtual tour of streets around the world. Here’s what we found:
After the original app skyrocketed up the ranks, it was quickly changed from free to paid (at $0.99) in an attempt to capitalize on the free traffic. The following caveat was also added to its description: “Hyperlapse by Instagram is another app which work differently”.
In the past, we’ve seen apps being named after already-popular and successful predecessors (for example, the inundation of apps with “flappy” in their name), in an attempt to ride their coattails to the top of the ranks. However, with this happening in a reverse order, we see that a case can be made for the power of mistaken identity.
We recently introduced a brand new section to the blog called Data Bits. This is where we choose an interesting set of data, analyze it, and turn it into bite-sized blog posts for your reading pleasure.
Last time we looked at app reviews and how many of them a typical developer can expect to have. This time we’re diving a bit deeper into the data to find out exactly what sort of message those reviews are trying to communicate.
While there are tons of reviews in almost every language, in this post we’ll be focusing specifically on ones written in English. Most text analysis tools are built around English which makes those reviews easier to analyze. It also happens to be the language we’re most familiar with.
After some internal discussion over the best way to segment the data by language, we decided to grab a slice of iOS and Mac App Store reviews from five major English-speaking countries: the US, Canada, the UK, Australia, and New Zealand. Our sample comes out to roughly 25 million individual reviews, more than enough to give us an idea of what’s going on in there.
Visualizing it all
Next we needed to decide how to analyze and present so much content. After much brainstorming we settled on a method which is both simple and effective: word clouds. Once all the pieces were in place, we threw all 25 million reviews into the blender and here’s what came out:
We were pretty surprised at how positive this word cloud turned out to be. Being app developers ourselves, we’re quite familiar with how picky reviewers tend to get, and we assumed that reviews would be a bit less glowing and slightly more critical. So we ran the numbers again, but got the same results. ‘Great’, ‘love’, ‘fun’, and ‘good’ are used far more often than words like ‘poor’, ‘useless’, ‘waste’, and ‘sucks’.
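Under the hood, a word cloud is driven by nothing more exotic than word-frequency counts. Here's a minimal sketch of that counting step (the tiny stopword list and sample reviews here are our own invention, for illustration only):

```python
from collections import Counter
import re

# A tiny stopword set for illustration; a real analysis would use a fuller list.
STOPWORDS = {"the", "a", "an", "and", "is", "it", "this", "to", "i", "of", "so"}

def word_frequencies(reviews):
    """Count how often each word appears across a list of review texts."""
    counts = Counter()
    for text in reviews:
        # Lowercase and split into words (keeping apostrophes for contractions).
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

reviews = [
    "This app is great, I love it!",
    "Great app, so much fun.",
    "Waste of money. It crashes constantly.",
]
print(word_frequencies(reviews).most_common(3))
```

A word-cloud renderer then just scales each word's font size by its count, which is why 'great' and 'love' dominate the image.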
And that’s it… NOT
Just because a word is positive or negative on its own doesn’t mean there aren’t other words in the sentence modifying it. While evolution has fine-tuned us humans to identify such language nuances, it’s not so easy for a computer. So we started tinkering with the data to see if there’s anything clever we can do to get a better idea of the context around each word.
We started off by sectioning the reviews according to their star rating. We figured that the star rating (1 – 5) of a review is usually a good indication of its overall sentiment.
We turned to the blender once more, this time creating a cloud of words from only 5-star reviews.
Compare that with a word cloud of all 1-star reviews:
There’s a definite contrast here, showing that words like ‘love’ and ‘beautiful’ aren’t thrown around as much in very negative reviews, while words like ‘crashes’ and ‘waste’ aren’t very popular in positive ones. We did the same breakdown with star ratings 2 through 4, and, as expected, there was a gradual change in the use of positive and negative words.
Adding some color
Armed with this new information we decided to try something crazy: assign a ‘positivity’ score to each word based on how often it appears in positive (highly rated) reviews versus negative (low-rated) reviews. We then recreated the original word cloud, this time coloring words with a high score green, words with a low score red, and everything in the middle gray.
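Here's a minimal sketch of that scoring-and-coloring step (the per-word counts and the color thresholds below are made up for illustration; this is not our actual pipeline):

```python
def positivity(word, pos_counts, neg_counts):
    """Score a word by what share of its usage comes from 5-star reviews."""
    pos = pos_counts.get(word, 0)
    neg = neg_counts.get(word, 0)
    if pos + neg == 0:
        return 0.5  # unseen words stay neutral
    return pos / (pos + neg)

def color(score, lo=0.35, hi=0.65):
    """Map a positivity score onto the three display colors."""
    if score >= hi:
        return "green"
    if score <= lo:
        return "red"
    return "gray"

# Hypothetical occurrence counts in 5-star vs. 1-star reviews.
pos_counts = {"love": 900, "crashes": 50, "app": 500}
neg_counts = {"love": 100, "crashes": 450, "app": 480}

for w in ("love", "crashes", "app"):
    print(w, color(positivity(w, pos_counts, neg_counts)))
# love -> green, crashes -> red, app -> gray
```

Words like 'app' that show up roughly evenly in good and bad reviews land in the gray middle band, which matches what the colored cloud shows.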
We weren’t sure what to expect out of this experimental analysis method, but it turned out to be pretty spot-on. We were surprised at how well the algorithm does at coloring words with a negative connotation (such as ‘crashes’, ‘waste’, and ‘useless’) red, while highlighting the positive ones (like ‘great’, ‘love’, and ‘good’) in green.
So it looks like what we suspected originally about the critical and picky reviewer was wrong, and that the first word cloud above was pretty telling on its own: there are way more positive things being said about iOS and Mac apps than negative. Who would have thought?
Not too long ago we built a completely new way of tracking app reviews by caching data for speedy full-text search and advanced filtering. That means we’re now downloading every single review for more and more apps every day in addition to the aggregate review data we already have. As you can imagine, that data occupies a lot of bits on our servers, but it also leaves us with tons of juicy data to analyze. In the coming weeks and months we’ll be writing several blog posts analyzing this data set and others.
It’s no secret that a few popular apps get a lot of reviews while most don’t even come close. We wanted to quantify this sentiment and figure out how many developers get how many reviews. We’re happy to share this information in the hope that it gives developers some insight into how their portfolios compare with those of other developers.
Our data set
There’s a lot of app review data out there from numerous countries and several stores, so for the sake of keeping this post short we decided to focus on a pretty specific slice of data, containing only iOS apps. Our sample consists of over 70 million aggregated reviews from 151 countries, which is enough to show some interesting and meaningful data.
Looking at this chart, three interesting points immediately jumped out at us:
- A typical iOS developer can expect between 1 and 5,000 reviews for their entire portfolio (a pretty huge range, if we may say so).
- Half of developers have more than 500 reviews.
- 5% of developers don’t have a single review for any of their apps (not even from mom!).
Also, the chart has three fairly equal slices inside the 1 – 5,000 range. This means that an almost-equal number of developers fall into the 1 – 100, 101 – 500, and 501 – 5,000 ranges. In which camp do you belong?
Looking at the extremes
A lot of developers would agree that a very large portion of reviews is held by a small portion of developers (the group represented by the small green slices in the chart above). Again, we wanted to quantify this, so we took various-sized chunks of the developers with the most reviews and measured how many reviews they held out of the total.
So it looks like a quarter of reviews are held by just one tenth of a percent of all developers. These are the developers in the extremes of our previous chart. In this case, the extremes are huge, with some developers having portfolios with millions of reviews.
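As a rough sketch of how that kind of concentration number is computed: sort developers by portfolio review count, take the top fraction, and see what share of all reviews they hold. The portfolio sizes below are invented to mimic one outlier dominating a long tail:

```python
def top_share(review_counts, fraction):
    """Share of all reviews held by the top `fraction` of developers."""
    ranked = sorted(review_counts, reverse=True)
    k = max(1, int(len(ranked) * fraction))  # at least one developer
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical portfolios for 1,000 developers: one outlier with a
# million reviews, everyone else with 100 each.
counts = [1_000_000] + [100] * 999
print(round(top_share(counts, 0.001), 3))  # the top 0.1% is a single dev
```

With a distribution this skewed, the single top developer alone holds over 90% of all reviews, which is the same shape (if more extreme) as the real data.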
This is all we have the time for today but there are still some unanswered questions: what types of apps are those 0.1% most-reviewed developers creating? Do specific genres or prices of apps draw more feedback than others? Are these reviews largely positive or negative? Next time we’ll explore some of these questions and more.
Almost everything we do is inspired by you, our members and readers, so please share your thoughts on this blog, twitter, or any of the other virtual hangout places out there.
As you know, we track hourly rank changes. Yesterday we started seeing a strange trend in the movement of ranks — a constant zigzag. Up and down 50 spots or more…every hour!
This is happening in most countries around the world and affects many apps outside of the top 25. It’s gotten flatter over the last few hours but it’s still visible.
At the moment this does not appear to be affecting apps in the Mac App Store, just iOS apps.
Is Apple making changes to the algorithm? Is the App Store experiencing technical difficulties? We don’t really know but we’re keeping an eye on it and will update this post.
Our last post focused on the prices of top-grossing apps. This time around we’ve decided to look at the developers behind the top 400 apps. We’ll be looking at public ranks data for the top 400 paid, free, and grossing iPhone/iPod apps in the U.S. during May 2011.
Let’s look at the total number of devs making apps in each category. Among the devs who made a top-ranked iPhone app in May 2011, on average…
- 311 devs are responsible for the top 400 Free apps
- 281 devs are responsible for the top 400 Paid apps
- 259 devs are responsible for the top 400 Grossing apps
To visualize this, we’ve created a Venn diagram so you can see exactly how many developers there are in each category and in the overlap between categories.
It’s probably a good time to point out that when we say “developers” or “devs,” we include publishers. On average, 22% of the top grossing devs make more than one top-grossing app. Most of these developers have 2-5 top-grossing apps. A select few have more than 10 top-grossing apps at any given time, and there are a couple of consistent rockstars: Electronic Arts averaged 25 top-grossing apps at any given time in May, and Gameloft averaged 12. When it comes to the big guns, this data confirms our expectations.
Among the devs in each category there is some overlap—some devs have made one app that’s ranked in the free or paid category and is also a top grossing app. Other devs are in the overlap because they’ve made multiple apps, each app ranked in a unique category.
Let’s take a closer look at the 259 devs making top-grossing apps (or, if you’re like us, let’s take a closer look at the blue bubble):
- 40% of the devs with top-grossing apps have also created top paid apps; 13% have created top free, and 20% of them have placed in all three categories.
Looking only at the devs making free top-grossing apps, adding the 13% to the 20% gives us 33% of devs with a top free app. Doing the same for paid top-grossing apps, 40% plus 20% gives us 60%. Adding those two figures together double-counts the 20% who placed in all three categories, so together they cover 73% of devs. That leaves a whopping 27% of devs whose top-grossing apps are not ranked in the top 400 free or paid categories.
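The arithmetic above is just inclusion-exclusion, and it's easy to sanity-check in a few lines (the percentages come straight from this post):

```python
# Percentages of the 259 top-grossing devs, taken from the post.
paid_and_grossing = 40  # top paid + top grossing (but not top free)
free_and_grossing = 13  # top free + top grossing (but not top paid)
all_three = 20          # placed in all three categories

free_total = free_and_grossing + all_three  # devs with any top free app: 33
paid_total = paid_and_grossing + all_three  # devs with any top paid app: 60

# Summing the two totals counts the all-three group twice, so subtract it once.
covered = free_total + paid_total - all_three
grossing_only = 100 - covered
print(grossing_only)  # 27
```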
Of course, we’re left wondering: who are these 27% of devs with apps that manage to make a lot of revenue without ever hitting the top free or paid charts? What kind of app pulls in enough revenue to make it into the top grossing category, but isn’t popular enough to make into free and paid categories? We’ll be covering this very issue in one of our upcoming posts.
Like the charts? Download the media package
appFigures collects hourly ranks from the App Store, so we have access to every app’s rank and its price. This has led us to ask: what will make you more revenue, a free app with paid in-app purchases or a paid app?
In the month of May in the U.S. App Store, 69% of the 400 top grossing apps were paid apps. The other 31% were free apps.
Among the top 100 grossing apps, free apps start catching up to paid apps: 41% are free and 59% are paid.
This might come as a surprise to anyone using freemium games’ in-app purchases to finance the world’s largest virtual zoo or most tricked-out restaurant. But hold your horses (or lions or burgers), because free apps do start to surpass paid apps within the top 25 grossing: 51% of these apps are free, 49% are paid. So among the highest-grossing apps, the freemium model is making as much as or more than the paid model.
Here’s a breakdown of those three tiers of top grossing:
The freemium model appears to be a viable means of making revenue. What’s more, there are limits to how profitable a paid app can be: even when a paid app is extremely popular, the large majority of paid apps are dirt cheap. Most of the paid apps in all three of these tiers are priced at 99¢, and the second most popular price is $2.99. In the top 25 grossing tier, 55% of the paid apps are 99¢ and 18% are priced at $2.99.
Here’s a look at the pricing breakdown in these top grossing tiers:
Last week when the Mac App Store opened its virtual doors we began importing your Mac Apps data. We’re excited to let you know that as of Sunday evening we’re also tracking your ranks. So now you get the total package.
Tracking Mac App Store ranks has brought up some interesting numbers regarding the distribution of apps. What kind of apps have found their way into the Mac App Store? Check it out:
Good news: The U.S. App Store has thawed and ranks are moving again.
As of 2pm EST 11/1, the three-and-a-half-day freeze is finally over. We still don’t know exactly what caused the lack of movement in ranks, and we probably never will. (Apple keeps secrets better than whoever offed Jimmy Hoffa.) On the bright side, we can all take comfort in knowing where our apps stand. At last.
As you know, we track hourly rank changes from every App Store around the globe. On Wednesday we noticed odd behavior across all of them: suddenly, everything was stuck. For eight hours, 99% of apps did not budge. We were just about to blog about it when movement resumed. A fluke, we thought, wandering away from the single legal pad we’d all been crammed around. But then it happened again! The U.S. App Store has been frozen for the last 24+ hours!
Let’s look at the numbers:
On average, 65-95% of apps move in the U.S. store every hour. Actual increases and decreases in rank depend mostly on the time of day, with midday (1pm – 4pm) seeing the heaviest movements. This is true for both iPhone and iPad apps.
Since 4am yesterday the percentage of apps jockeying for rank has dropped dramatically to just 0.5% – 4.5%. What’s also interesting is that of the small percentage of apps that did move, the majority weren’t in the top 100, and they were restricted to just five categories, while every other category remained motionless. Weird.
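To make "percentage of apps jockeying for rank" concrete, here's a minimal sketch of how such a number could be computed from two hourly rank snapshots (the app names and ranks below are made up):

```python
def percent_moved(prev_ranks, curr_ranks):
    """Percent of apps whose rank changed between two hourly snapshots.

    Both arguments map app id -> rank; only apps present in both count.
    """
    common = prev_ranks.keys() & curr_ranks.keys()
    if not common:
        return 0.0
    moved = sum(1 for app in common if prev_ranks[app] != curr_ranks[app])
    return 100 * moved / len(common)

# Hypothetical snapshots taken an hour apart.
prev = {"A": 1, "B": 2, "C": 3, "D": 4}
curr = {"A": 1, "B": 5, "C": 3, "D": 2}
print(percent_moved(prev, curr))  # 50.0
```

During a normal hour this number sits in the 65-95% range we mentioned; during the freeze it collapsed to under 5%.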
We like visuals. Here’s a breakdown of percentage changes within the last 14 hours:
A snapshot of approximately the past 30 hours:
Is Apple changing its ranking algorithm? Are the current ranks being pulled from an old cache while Apple tries to fix a problem…and is Apple having the same difficulties they had a year ago?
Assuming we find out what’s behind Apple’s strange behavior, you’ll be the first to know. It’s days like this I’m glad we keep a private investigator on staff. Kind of.
Stuck up high? Stuck down low? Stuck entirely off the grid? Share your frozen experience.