*This post is a part of a series, vignettes really, meant to capture the work I’ve been up to in Sri Lanka this December & January and my own thoughts and impressions of the changes I’ve seen and the people I’ve met. Check back here daily for more :)
We were heading up to Vavuniya and Kilinochchi to do a series of subscriber workshops — workshops that would let us talk with potential users about our free service and then subscribe those interested to the GrowLanka system. We had left for the north at 2:00 am. At about 5 am, after a bumpy car ride spent putting the last-minute touches on our preparations – assembling our workshop signs, little pamphlets, a banner, going over logistics one last time – I looked up from my computer in panic. Among my bevy of last-minute checks, I'd run one final system check: I'd tried to register as a pseudo job seeker. But no text message was coming back; I wasn't even sure my text was going through. I tried ten more times. And then really started to panic. We needed the system to work — if it didn't, how were we going to subscribe all the people we were set to meet? Not just that – this was an incredible disappointment. How would we convince people of the merits of our system when it wasn't even working? And the time we were wasting? The money we were spending on this trip? My mind started to freewheel through all the options.
At 6:30 I called the team that had helped develop GrowLanka and frantically explained the situation. They told me to calm down. Our SMS gateway was based in South Africa. It wasn't even dawn there yet. I thought to myself: we need a back-up. I asked one of our developers if there was any way to develop a wifi-enabled gizmo to register people using my computer. It was a temporary fix, but if we could get them into the system and auto-send them a message, we wouldn't lose them completely. He got to work.
By 10 am, our back-up plan was our only plan and we were going with it. The folks in South Africa were still sleeping. Instead of getting people to sign up via SMS, we had no choice but to go the old-fashioned route: I printed up a massive number of subscription cards (somewhere on the order of 1,000). Instead of subscribing instantly via text, people were going to have to spend the 30 seconds to fill out this sheet, and then I was going to have to enter them into the system manually. It was, in my mind, a massive technical fail. One that I think is worth looking back on — because in the realm of user products and technological solutions, it wasn't a massive fail merely because it was inconvenient. Quite frankly, as back-up plans go, our subscription-card method was probably easier to explain to new users and let us subscribe far more people, more quickly, than we originally would have been able to. Still, even though the system was (thankfully) up and running within a day, there were major lessons to be learned. Here are just a few:
1. Double, triple check any technological system at least 2-3 days in advance of deployment. The last systems check we had run was on the Monday prior. There's a real chance the technical glitch happened at 4:55 am that Friday, but I'll be honest – I wouldn't have had any idea if that were the case. Because I hadn't checked. If I had covered all my bases on, say, Wednesday, I would have had more of a time window to fall back on – as it was, I could only give our developer less than 5 hours to remedy the problem. It worked this time, but that's never a guarantee.
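For what it's worth, this kind of pre-deployment check is easy to script so it can run daily instead of relying on memory. Here's a minimal sketch of an end-to-end registration smoke test — `send_sms` and `poll_replies` are hypothetical hooks standing in for whatever your SMS gateway's actual API looks like, not part of GrowLanka itself:

```python
import time

def registration_smoke_test(send_sms, poll_replies, timeout_s=300, interval_s=10):
    """End-to-end check: register a test subscriber over SMS and wait for
    the confirmation message. Returns True if a reply arrives in time."""
    send_sms("REGISTER TEST-USER")          # simulate a new subscriber
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        # Did the gateway send our confirmation back yet?
        if any("confirmed" in reply.lower() for reply in poll_replies()):
            return True
        time.sleep(interval_s)
    return False                             # no reply: raise the alarm early

# Stub gateway for illustration only -- a real check would hit the live gateway.
outbox, inbox = [], []
def fake_send(msg):
    outbox.append(msg)
    inbox.append("Registration confirmed")   # this stub replies instantly

result = registration_smoke_test(fake_send, lambda: inbox,
                                 timeout_s=1, interval_s=0.1)
```

Run from a cron job a few days before (and every day of) a deployment, a check like this would have caught our Friday-morning outage while there was still time to fix it.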
2. Make (actual) checklists. And double and triple check them too. I'm a list maker – don't get me wrong. But I've been working on GrowLanka for so long that all my checklists are mental. Logistics. System. Workshops. Accommodations. I reel through these in my head quickly, often just before I head out. The problem with this is that mental checklists don't leave much room for someone else to double check and make sure you're not screwing up. Moving forward, our team will be sharing our checklists and every one of us will be checking them. System failures fall on everyone's shoulders – so everyone should be checking and everyone should be ready to react quickly when things go wrong.
3. The healthcare.gov problem. A lot of people talked about how the failure of the original healthcare.gov website was problematic because people couldn't get signed up. If they couldn't sign up on their first visit to the site, the critics argued, users would be unlikely to visit again — meaning the Obama administration had blown its one good shot at getting people subscribed. How many people ended up not subscribing because of that technical glitch is a number that will never be known. Going through our own mini "healthcare.gov crisis," I started to realize the bureaucratic challenge that exists when rolling out major new technological systems and devices. The critics raged after the failed launch of the website: how could Obama not have double-triple-checked this system when his administration knew how big a deal the rollout was, when it was such a legislative and administrative priority for him? But you see, when you're overseeing so many different things — the administrative rollout, the locational distribution, the training of personnel, the lining up of resources for new users, etc. — you're more likely than not to delegate the responsibility of maintaining and checking the actual system to the people who developed it. That's what we did. And that was a massive failure on our part. Checking the system should have been a priority – not an assumed check on our to-do list – and EVERYONE on our team should have been charged with it as their number one task, whether they were a developer, a logistics coordinator, or a finance manager. Chief responsibility should have rested with the developer (you can't make everyone responsible for everything, after all – then who's actually responsible for getting things done?), but that is not an excuse for everyone else not taking the 5 minutes needed to do a quick, daily systems check. The more people checking, the better.
4. You have to give users an experience. Because new subscribers couldn't first interface with the system via phone at sign-up, we curbed our ability to really ensure that they understood how exactly the system functioned. That said, one of the key components that made our system so unique was the very fact that a user could subscribe via text. He could do everything he needed to with the system via text. Not having that functionality up and running on our first day precluded us from building user trust in our system on Day 1. And I'll be real with you, that sucks.
Part of the beauty of GrowLanka was our vision of the system creating demand for its own services – a self-sustaining means of growing its own market. If it took off and people found the system helpful, we figured they would tell their friends, family, and people they knew. We'd have essentially trained the initial user to be a mini workshop interlocutor — having already subscribed to the system under our guidance, they could easily tell a friend: hey, it's easy, it takes 30 seconds and like 4 steps to subscribe, and you get this great service to boot. And these community members, we figured, would be far better ambassadors for GrowLanka than we would ever be, because a) they spoke Tamil and b) people in the community would trust them. The problem is, with our system down on the first day of workshops, we had double the work — we had to teach people how to subscribe instead of showing them, and we had to convince them that this service a) worked and b) would be valuable. We hurt our own chances of selling GrowLanka that day.
5. Listen to the community – be serious about incorporating their feedback into your process. We were out in the community subscribing primarily women and farmers to the system via 30-minute workshops that we had pre-arranged and scheduled through our partner organization Sevalanka. Sevalanka is an aid organization working in the north that is trusted by the community. They believed in our service and have served as an instrumental partner to us since our freshman year. The people at Sevalanka have a very intimate understanding of the communities in Northern Sri Lanka and an acute sense of the people's needs there. Even more – they have what they call "community mobilizers," volunteers who work to get information out to the communities. So, in short, the folks at Sevalanka made these pre-arranged workshops a good plan, and they were effective. But what we noticed on site, while registering women from a little village outside Vavuniya, was that more often than not they would ask to subscribe their kids. That observation was complemented by something one of our partner employers had told us: "you know who would really love this service? young people." There's a reason – overall unemployment may be around 4.5% in the country, but in the north that figure is much higher, and for young people it is in the double digits. Newspaper headlines talk a lot about how this millennial generation is more educated even as fewer of them are employed. Our system, to be frank, was designed with a very different user in mind – the uneducated, underserved people in the villages with less access to resources. And yet when MAS suggested that we head over to the University of Jaffna and see if we could sign people up there, it didn't seem like half a bad idea. What unfolded there, I'll leave for the next post.
But the takeaway is this: in designing these solutions, let the community you’re working in – not your own assumptions – inform who and how you serve. In our case, it turned out that our system might be attractive to an entire subset of the population that we hadn’t even considered!
6. Repeat loudly after me: field work never goes as planned. My best advice to people working on service projects like GrowLanka – anticipate the worst. Make contingency plans. Role-play scenarios. And try to stay on the ground as long as possible. The skill of responding to crises in real time is one that takes experience, and it's the one you'll probably draw on most when things go wrong. That said, if you have some vision of a back-up in your mind, you're better off – if only because creating that plan in the first place will have forced you to think through all the things that just might go wrong and how you can avert them. And stay on the ground longer, since you will inevitably start to get into a rhythm. I was on the ground in the north for a month. The technical fail I talked about was a problem for a day. After I had led more workshops, I knew the kinks in our process and resolved them. And then I had 20+ more days to go out and get people subscribed. Giving yourself a lengthy runway to iterate, to user test, to deal with administrative and technical fails can mean the difference between successfully deploying a system and not. Not to mention, returning to a region and spending a lot of time there helps a bunch when it comes to establishing a relationship predicated on trust with the people you work with and serve on the ground.
Up next: Crashing Convocation