
IE6 Support Going Away November 15, 2010

25 Aug

Dearly Beloved, we are gathered here today not to praise IE6 but to bury it.

Microsoft released Internet Explorer 6 on August 27, 2001, almost exactly nine years ago. Since then, it has been superseded by not just one but two separate versions, and IE9 is near release. IE6 has at least 23 unpatched, serious security holes – Microsoft has no plans to correct any of them, and these holes may have contributed to the serious security breaches at Google reported in early 2010.

Furthermore, IE6’s old age means it lacks support for many of the features that make it easy to develop powerful and sophisticated web applications. As a result, every time we release a new version (which we do twice a week), we need to do extra work to get PBworks to function correctly with IE6. Now that IE6 usage is finally well below 5% of our userbase, we’ve decided to formally retire our support on November 15, 2010.

On September 15, 2010, all IE6 users visiting our site will receive an in-product message alerting them that they have two months to upgrade to a more modern browser. We strongly recommend Google Chrome 5+, Safari 5, or Firefox 3.6 for cool features like drag-and-drop file uploading, although we do also support IE7 and IE8. Opera users – yes, we make a best effort to support you, too!

On November 15, 2010, all IE6 users will be redirected to our mobile site. This will ensure that they are still able to consistently and correctly access their data, but to use our more advanced technology, users will need to upgrade per the above.

PBworks adds support for Google Chrome, drops Firefox 2

6 Oct

I do a lot of cross-browser testing as part of my job as Gentleman of Quality (Head of QA) here at PBworks, and I keep a close eye on which browsers our users are adopting and which ones are fading away. I’m pleased to announce that we’re adding support for Google Chrome, an excellent browser that is rapidly becoming the standard for high performance on the web. In addition, I’m glad to see the vast majority of Firefox users have upgraded to the latest version of that browser. If you haven’t done so already (and just over 1% of our users have not), please take a moment to upgrade. PBworks will no longer fix bugs that appear exclusively in Firefox 2.

Modern browsers are faster, more secure, and much more helpful. We understand there are a number of you who are still forced to use ancient, dangerous, and painfully buggy browsers such as IE6, but overall the adoption of new browsers has been surprisingly quick.

(Note: We are still supporting IE6, since many corporate IT departments mandate it, but if you have a choice to upgrade to IE8 or another modern browser, we *strongly* recommend that you exercise that choice!)

PBworks is excited about the possibilities that modern browsers allow for and want all of our users to share in those benefits. Upgrade your browser today:

Google Chrome
Firefox 3.5
Internet Explorer 8

Ian Danforth
Gentleman of Quality

A/B Testing at PBworks

16 Sep

At PBworks, we take our data seriously.  So it should be no surprise to learn that we use A/B testing techniques to aid our product and website development decisions.  Having a web-based product means that we can quickly learn what our customers like and what they don’t like and make changes accordingly.  If you’re not familiar with A/B testing, Avinash Kaushik has a great primer.

Analyzing Test Results
As the data analyst here, an A/B test for me can be reduced to just a few simple numbers: (1) the difference in conversion rate between the test group and the control group, and (2) the level of confidence we have in that difference.  The first number is easy to calculate and explain to the rest of the team, e.g. “The test site resulted in 30% more sign ups than the current site.”  Everyone gets that: engineers, marketers, and managers.  As an example, here is how one of our recent website experiments played out over a 2-week period:


In the chart, each day shows the cumulative conversion rate (i.e. total sign ups since the beginning of the test divided by the total visitors since the beginning) for the test site (Test) and the current site (Control).  Notice how well the test site is outperforming the current site.

However, anyone who’s played games of chance can tell you that numbers that look good on this turn may not be so hot on the next.  For example, if you flipped a quarter 5 times and it came up heads 4 times, would you feel confident betting that the coin is biased towards heads?   What if you flipped 80 heads out of 100 tosses?  At that point, you’d be much more confident that the coin is biased towards heads.  In our A/B test, we measure the conversion rate for a small subset of all visitors – let’s say 10,000 visitors with 100 sign ups.  Do we believe that this conversion rate will be the same for the millions of visitors we expect in the months to come?  Do we need to test 1,000,000 visitors to be confident that the observed increase will apply to all visitors and was not just the luck of the draw?
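To make the coin-flip intuition concrete, here is a quick standalone sketch (not part of our tooling) that computes the chance of seeing at least k heads in n tosses of a fair coin:

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k heads."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 4 heads out of 5 flips happens almost 19% of the time with a fair coin,
# so it is weak evidence of bias.
print(binom_tail(4, 5))    # 0.1875

# 80 heads out of 100 is vanishingly unlikely under a fair coin.
print(binom_tail(80, 100))
```

The same logic is why a conversion-rate lead over a handful of visitors means little, while the same lead over tens of thousands of visitors is hard to dismiss as luck.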

Statistical Confidence
Statisticians have figured out a way to calculate a numerical representation of the confidence that the population (i.e. the millions of visitors our site will see in the future) will show an increased conversion rate if the sample (i.e. the thousands of visitors that have hit the test site so far) shows an increase.  Though we have a reliable, albeit complex, formula for this confidence number (using a 2-proportion z-test, or an online calculator), explaining what the number means to the rest of my team hasn’t always been easy.  How would you interpret: “We saw a 30% increase in sign ups, but we’re only 90% confident there is an increase”?  What this means is that if we ran this test 100 times, we’d expect in 90 cases to see an increase (though not necessarily a 30% increase) and in the other 10 cases to see a decrease or no change.  For some organizations, this would be enough confidence to make the test site the actual site for everyone; for others, it wouldn’t.  The decision of what confidence level to use comes down to a trade-off between speed and certainty.
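For the curious, the 2-proportion z-test mentioned above can be sketched in a few lines of Python using only the standard library. The visitor and sign-up counts below are the hypothetical ones from the example, not real data:

```python
from math import erf, sqrt

def confidence_of_increase(signups_a, visitors_a, signups_b, visitors_b):
    """One-sided confidence that site B's conversion rate exceeds site A's,
    via a two-proportion z-test (normal approximation)."""
    p_a = signups_a / visitors_a
    p_b = signups_b / visitors_b
    # Pooled proportion under the null hypothesis of "no real difference"
    p_pool = (signups_a + signups_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Standard normal CDF of z is the one-sided confidence level
    return 0.5 * (1 + erf(z / sqrt(2)))

# Control: 100 sign ups from 10,000 visitors; test: 130 from 10,000 (a 30% lift)
print(confidence_of_increase(100, 10_000, 130, 10_000))
```

With these numbers the confidence comes out a bit above 95%, which is why even a large-looking lift still needs a decent sample behind it.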

Unlike coin flipping, though, recreating the experiment over and over again would take too long and negate most of the gains we expect from A/B testing.  So it is difficult for some to internalize what this confidence level represents.  Many people, especially those that are risk-averse, don’t like dealing with probabilities and will keep asking for more data.  But you’ll never be 100% certain that the test site is better converting than the current site.  So at some point you need to stop collecting data and make a decision.

Sunrise Charts
What I’ve found to be a useful aid in getting many of the risk-averse types to accept some risk has been to overlay confidence areas in the time series chart like so:


My team has dubbed this a “Sunrise Chart” (yeah, I’ve never seen a green sky during a sunrise either, but you get the picture).  The solid black line and dashed blue line are the same as in the previous chart and the colored bands represent confidence levels.  If the test line veers into the green area we have a 90% level of confidence that the test site out-converts the current site.

Many of the less technically-inclined members of my team find that this chart makes sense on a more intuitive level than a statement like: “We saw a 30% increase in sign ups and we’re 90% confident there is an increase.”  The chart shows this same information, but it also shows two other things.  First, the random day-to-day fluctuations in conversion rate average out and the rates stabilize over time.  When people see more stable conversion rates, they are more inclined to feel confident in the difference they see.  Second, this chart shows that as we collect more data over time, a smaller and smaller increase is needed to reach a specific confidence level.  This is essentially the same piece of information as seeing the conversion rates stabilize, but since these confidence bands are generated from a complex mathematical formula, it gives some peace of mind that the underlying math jibes with their gut.
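To illustrate the second point, here is a rough sketch of how the edge of a confidence band shrinks toward the control rate as traffic accumulates. The z critical values are the standard one-sided ones; the 1% conversion rate and the visitor counts are made up for illustration:

```python
from math import sqrt

# Standard one-sided z critical values for common confidence levels
Z = {0.80: 0.8416, 0.90: 1.2816, 0.95: 1.6449}

def band_edge(control_rate, n_control, n_test, confidence):
    """Approximate test conversion rate needed to reach a given one-sided
    confidence that the test beats the control (normal approximation,
    using the control rate for the variance estimate)."""
    se = sqrt(control_rate * (1 - control_rate) * (1 / n_control + 1 / n_test))
    return control_rate + Z[confidence] * se

# The 90% band edge gets closer and closer to the control's 1% rate:
for n in (1_000, 10_000, 100_000):
    print(n, band_edge(0.01, n, n, 0.90))
```

At 1,000 visitors per arm the test needs roughly a 1.57% rate to clear the 90% band, but at 100,000 visitors about 1.06% is enough – which is exactly the narrowing you see as the sunrise chart marches to the right.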

To wrap things up, at PBworks we believe that A/B testing is an important tool to develop the most relevant software for our customers.  However, when experimenting, it is not enough to simply compare the conversion rates of the test site with the current site.  We want some level of certainty that if we do see an increase, it is not simply due to a lucky draw.   That is where confidence levels come into play.  Finally, and perhaps most importantly, it’s not enough for just the technically inclined to “get it” with a statistical analysis of the results.  Rather the whole team needs to be on board with the decisions that result from the experiment, so everyone needs to be comfortable with the analysis.  This is when Sunrise Charts can be a valuable aid.

PBwiki Single Sign-On 3.0

2 Apr

We’ve recently released the third generation of our single sign-on capability, which represents a significant increase in functionality.

Previous generations of PBwiki’s single sign-on let you authenticate people from a particular domain; in other words, if I ran an authentication server for my company’s domain, I could allow other people with identities on that domain to log into my individual PBwiki without creating their own PBwiki accounts.

However, we discovered that many of you wanted to authenticate people from more than one domain, and for more than one wiki.  This is a complicated problem, so we sent our resident genius and CTO, Nathan, off to build a solution.  He returned with Single Sign-On 3.0.

Now you can authenticate people from any domain to access any wiki that you control.

Since I am but a marketing guy, for the full details, I’ll turn you over to Nathan and Steven, one of our Support Gurus:

“Single Sign-On (AKA SSO AKA delegated authentication) allows you to build an authentication server that can use your existing user database/directory to help identify and verify users so they may have access to your company’s PBwikis. By doing this you eliminate the need for your users to register an account with PBwiki which in turn eliminates the need for them to remember another username/password.

In four steps the user can use their existing identity to log into your wikis. Depending on the authentication server, you may also be able to set delays (a wait period before logging in), access levels (reader, writer, editor, admin), and wiki access (wiki1, wiki2, wiki3, or all of the wikis).

Here’s how it works:

1) Your user visits the wiki, and if not already logged in they’re redirected to your authentication server along with several URL parameters required to complete a login.

2) Your authentication server identifies the user and determines the wikis and access levels to grant.

3) Your authentication server redirects the user back to PBwiki along with securely signed URL parameters which indicate to our servers who the user is and what permissions they should have on your wikis.

4) PBwiki verifies the URL parameters and signature, then creates a new user account if necessary and then grants the indicated permissions and issues an appropriate set of session cookies for the particular user.

This system, while highly secure, is quick and easy for end users and simple for an IT administrator to set up on your organization’s network. PBwiki has sample code available for a number of common programming languages and can refer independent consultants who have experience integrating customer intranets with PBwiki’s Single Sign-On features.”

For more details on SSO 3.0, you can refer to our documentation on delegated authentication.

New Feature: PBwiki Has Spellcheck!

18 Mar

Yes it’s been a long time coming but today we’re thrilled to announce — PBwiki now comes with spell check!

To double check your spelling, click on the spell check icon in your toolbar (beside the plugin icon). All misspelled words will be underlined with a dotted red line. Right click on the word to view suggested spellings.


We love feedback, let us know what you think!

Hello, from the new Web Analyst

21 Nov

I joined PBwiki last month as the first web analyst on the team.  One of my key roles here is to analyze how people interact with their wikis so that we can craft our products and services to best meet your needs.

We can monitor PBwiki to see what’s working and what’s not
One of the greatest facets of having a “software as a service” (SaaS) model is that we have an on-going relationship with our users and can observe how they interact with our product.  Compare that to the standard shrink-wrapped software model, where the vendor sells the product and then disappears from the customer’s sight until they want to sell an upgrade.  The benefit, of course, of us knowing how you are using our product, is that we can enhance the product to better suit your needs.

Case study: How many users use Document Management functionality?
Before the new features
As a concrete example of how PBwiki analyzes user behavior to improve the value of our product, let’s look at our new Document Management capabilities.  On any given day earlier in October, roughly 35% of active wikis were uploading files (see table 1).  This adoption rate indicates that you find document management useful and that we need to focus product development effort on it.  But at the end of the day, this number doesn’t give us much guidance in terms of what direction to take this feature.  To make any decisions regarding the product, we also had to look closely at qualitative data.  The quantitative data (i.e. the 35% adoption rate) lets our product team know what you’re doing, but the qualitative data lets us know why you’re doing what you’re doing.

Table 1 – Pre-feature enhancement adoption rate

Date Adoption
10/11 33.0%
10/12 34.3%
10/13 35.7%

After we released Document Management
After analyzing the qualitative data (e.g. user feedback), we realized the need for several new features (including access control), implemented them, and pushed them live near the end of October.  So, how do we know if these new features were useful?  We monitored the adoption rate and saw it jump over 10% (see table 2)!  The upshot of this example is that at PBwiki, we listen to our users so that we can build the best products to solve their needs.

Table 2 – Post-feature enhancement adoption rate

Date Adoption
10/29 41.3%
10/30 41.1%
10/31 40.0%

Web analytics and your privacy
We take your privacy seriously at PBwiki.  As a reminder, the non-binding English summary of our privacy policy is

  • If you mark your wiki private, we’ll keep it private.
  • We don’t share personally identifiable information with others.
  • We hate spammers, too. We’ll try not to bug you with email.

During any analysis, I will be sifting through the 1 billion events that our users have generated over the past few years.  Because of the immense size of our data set, I work with anonymous and aggregate data.  In the analysis of Document Management above, I included over 100,000 wikis and at no point did I need to drill into the specific details of any one wiki or user.

What kind of data would you like to see?

PBwiki cruise lines

4 Sep

When you think of taking a vacation cruise, you probably think of all the food you’ll eat, shuffleboard you’ll play, and booze you’ll drink. But what do the folks running the ship think about? My bet is they think about some of the same things I think about every day: keeping the ship operating smoothly, charting a halcyon course, and making sure that process never enters the minds of its customers. Come with me for a tour of the ship we pilot for the pleasure voyage we like to call PBwiki.

Surf’s up on PBwiki traffic

“Safety first” isn’t just the mantra of cruise liners and middle school crossing guards, we take it seriously here, too. Your data is kept on three different PBwiki machines, then additionally encrypted and backed up off-site. How much data are we talking about? Your average desktop computer can hold about 200GB of data, of which about 6GB is your illegal music collection. We track over 25 times that amount: 5400GB of your data. In the past year we’ve had to triple the number of servers we use to store it all!

Engines are pretty important to cruise ships, but they’re also complicated and can break down. Putting in multiple engines is difficult and expensive, but it’s worth it: if one breaks down, you’ve got a spare. PBwiki is the same way with computers. Over the last year, we’ve worked to add “hot standby” servers that automatically take over if another computer experiences a failure. Ever wish you could just switch computers when Word or Windows crashes and pick up where you left off? With PBwiki you can!

Expanding RAM and capacity

Captains don’t drive a ship blindfolded, and neither do we. Earlier this year we fully instrumented our machines and services with a program called “ganglia.” It takes measurements and displays them on our dashboard so we can detect problems and calculate trends. The graph at left shows the effects of adding RAM to a beleaguered backup database: CPU usage drops and we are even more prepared in the unlikely event of a problem.

Although calling it a “captain’s log” would evoke too many Star Trek jokes, our Operations team logs all changes to the service, so we have a point of reference when tracking down performance issues, or to make sure certain checks were made. We keep it on PBwiki itself and simply call it the “log.”

Of course, this metaphor only goes so far: I haven’t yet secured the right to use deadly force to suppress piracy and mutiny. Apparently that would be against the “laws” and we haven’t relocated to my ideal office in international waters. Join me next time when we go into more technical details about PBwiki’s commitment to operational excellence!

Search 2.0: Now Better, Faster, Stronger

28 Jul

Mmm, dogfood. Here at PBwiki, we make use of our own product pretty extensively. And having used an internal PBwiki for some three years, we’ve accumulated a pretty large collection of material! This makes it ever more important that we be able to search it and find what we’re looking for. So we’ve dramatically overhauled (and improved) search.

You may have noticed when PBwiki search improved a few weeks ago, getting phrase search, boolean inclusion/exclusion, and filename matching. Well hold onto your pants, because you ain’t seen nothin’ yet.

Search has been dramatically restyled again:

  • Information about who last edited a page and when
  • Suggested page names similar to your search query
  • Helpful icons in the result list so you can quickly distinguish wiki pages from PDFs
  • Up to 200 characters of “snippet” context (vs. 80 previously)
  • Ranked search results, interleaving results from wiki pages, discussions, and filenames
  • The ability to drill down and search only pages with a given tag or in a certain folder

OLD Search:

NEW Search:

Coming very soon? The ability to search inside PDFs, Word DOCs, Excel files, PowerPoint, and more.

If you have feedback about this new feature, what you like and don’t, please chip in here and I’ll read every comment! 🙂

The Universal Edit Button

20 Jun

When the web was first conceived, it was intended to be a read/write medium. Right now, wikis are the best implementation of that vision: web pages that are easy for people to edit directly from their web browser.

Yesterday, we heard about the Universal Edit Button from a commenter on Get Satisfaction. When we looked into it, we thought it was a great idea: the Universal Edit Button is a Firefox extension that adds an “Edit” button right in your browser on pages that support it. We are happy to say that we now support this great idea, joining a growing number of wiki providers online. You’ll see it on all PBwiki 2.0 pages (PBwiki 1.0 will get it on Monday) that you have writer access to — it’s a pencil in a green square:

Right now, you have to be using Firefox to use the extension, but people are working on a version for Internet Explorer. Keep up to date at the official Universal Edit Button web site.

PBwiki needs alpha testers

25 Apr

Feeling adventurous? We’re building a team of PBwiki users to help us test out new features before they go live for everyone. We need about 100 testers who know their way around a wiki and aren’t afraid of the cutting edge.

If you’re interested in trying out new things ahead of time, don’t mind the occasional glitch, and are interested in giving us feedback, you can apply here. We can’t wait to hear from you.