Early in my career I believed that Tech Writers were an interface role to Marketing and Sales. As a product-focused developer, I didn’t understand the true value of Tech Writers beyond cleaning up content to make it customer friendly, since that was the role they most often took in the places where I worked. I now see them much differently, and I wanted to write a bit about what changed my mind.
For me the big turning point was at Safe Banking Systems when we embarked upon SOC2 compliance. This process involves creating a mountain of documentation, so to facilitate it we brought in a seasoned tech writer to focus on the project. It was in working together that I realized the depth of value a good Tech Writer brings to the table.
Maybe the most important facet of Tech Writers is that they can get information out of the heads of tech folks who would otherwise not write documentation, due to lack of time or other “reasons”. A Tech Writer will sit with them, interview them while taking extensive notes, and then write up the results. The interviewee then only needs to review and tweak the write-up for accuracy. This removed a ton of friction from the process, and we ended up generating much more documentation, much more quickly.
For those who do write documentation, Tech Writers can free up your developers to get back to thinking about hard problems and writing code. Are there any parts of your process that require extensive documentation on a regular basis? A full day of a developer’s time spent producing a polished document becomes an hour or less spent getting the information out of their head. A good Tech Writer can get this work done much more quickly than a typical dev. This similarly removes friction and leads to more documentation actually getting written.
When we had completed SOC2 our seasoned Tech Writer stayed on board, and her value only grew as she better understood the product and the team. We generated more and more documentation and, as you have probably seen yourself, eventually just keeping the documentation organized became a problem. Thankfully her role naturally evolved to providing organization and structure to all of this documentation, which would otherwise have become a big mess. One hard-won lesson here is that without some way to provide this structure it doesn’t matter how good your documentation is, because no one will be able to find what they’re looking for anyway.
Tech Writers can also facilitate the updating and regular review of documentation to ensure accuracy. Before we had tech writers our documentation was in pretty bad shape, often written by more junior people as a way to learn, and it was rarely updated. At first they helped us ensure product changes were making it into the docs, but as we grew they also helped us implement processes to ensure freshness and compliance.
Finally, Tech Writers can facilitate important writing projects that you would otherwise put off or leave undone. When I led the Model Validation Working Group our goal was to help banks better understand and integrate our machine learning driven technology. None of our competitors had similar technology, so it was all quite alien to our customers, and in a regulatory domain like accounts and payments screening it required a whole lot of explanation. I knew we would have to build a massive document detailing every facet of integration, customization, tuning and maintenance. With two Tech Writers we were able to complete this project in about six months. Without them it never would have happened.
Tech Writers are often overlooked because theirs is a supporting role, which makes their impact harder to see. They can, however, make the hard easy and sometimes even make the impossible possible. I now believe every team over a certain level of maturity should have at least one Tech Writer. If you don’t already have one on staff, consider hiring a Tech Writer to facilitate your own team’s growth.
I wrote this post with fond memories of working with Cindy Orwig and Abby Friedman, both of whom I worked with at Safe Banking Systems. Abby runs Marketing Solutions Consulting and is available for marketing and tech writing work.
Sandy was a brilliant kid. School was a matter of course for her; she got mostly As without any major effort. What made Sandy special, though, was her affinity for music: she truly loved it, and it came to her naturally. Even as a middle schooler she would spend hours each day practicing her keyboard and guitar. It wasn’t hard work for her; it was her dream, and she was drawn to it naturally.
Through high school people called her a musical genius; she was far above the other students in her small school. People would tell her stories about how successful she would be in the future, and they would tell her just how special she was. Her parents and teachers praised her endlessly. She got into a prestigious music university with little effort, backed by numerous recommendations from her high school teachers.
Now in university, Sandy definitely noticed the playing field had changed. She was still one of the best, but only one of them; there were others of similar dedication and skill. While she felt less special, she made fast friends with the other talented kids, formed bands, and even sold some music online for real money. It was at this point that she decided she wanted to be a professional musician in the music industry. The idea was set, and she wouldn’t be dissuaded.
In university Sandy found she had a weakness in composing new music. It was the first time in her life she had ever really struggled. It never came out the way she wanted, and when she compared her compositions to the best in her class they were mediocre at best. This really hurt; she wasn’t used to being second rate.
Time flew, and soon graduation day came. Sandy still felt she was special, and wanted more than anything to work as a professional musician.
Sandy moved to New York and shared a small apartment with four other hopeful kids. By day they all worked service industry jobs, as waiters or cashiers. By night each had their own dream to chase: there was a comedian, a dancer, an off-Broadway actor. Each, like Sandy, felt they had a special talent, and each worked very hard to make it.
Sandy was in several bands, but none felt truly special. She enjoyed playing live shows in NYC, but year after year the long days wore her down. She had seen some lucky ones join bands that achieved moderate success, eventually enough to make a real living at it, but they were few and far between.
Roommates came and went, and a rare few of those saw success as well. There was the comedian who opened for Jerry Seinfeld at The Stand, and a dancer/actor who landed a prime role in an off-Broadway play. Only one really made it: an actor who got a recurring role in a popular TV show. Most ended up packing up and going back to where they came from.
Years rolled by, but Sandy was unwilling to give up. She started a side business teaching music lessons to hopeful kids to help pay the bills. It started to eat into her live performance time, but she was just sick of living three to a bedroom in New York.
By the time Sandy was 30 she had made quite a name for herself in the New York nightclub scene. She had a lot of friends, and between teaching on the side and now managing a Dunkin’ Donuts, she was able to afford a studio apartment all to herself. But she felt empty; this was not the success she felt she was destined for. The wide-eyed kid she had been thought she would take over the world.
She lived out the rest of her life in much the same way, never breaking through to the success she wanted. She never got the winning lottery ticket. It wasn’t a wasted life by any means, but she never seriously considered any of the alternate paths around her, and that funneled her toward a narrow cutoff that very few people, even very talented ones, make it through.
What could Sandy have done differently?
- She could have gone to graduate school and taught music in university or high school.
- She could have become an online music persona, starting a YouTube channel and Twitch stream, teaching or just playing.
- She could have taken programming classes and worked on music software.
- She could have pushed the boundaries in innovative computer generated music.
- If she had explored more, she could have done something completely different. Maybe she had an innate talent for graphic design, but she never tried it.
But she never set herself up for any of these opportunities; she never explored outside of music or developed synergistic skills. She assumed she would follow the normal path and success would come eventually. Unfortunately, in cases like these, where a lot of people are striving for the same thing, it’s not just about talent or skill; there’s a good deal of luck involved.
The rise of tooling for vulnerability detection combined with pressure driven by Vendor Due Diligence is causing a massive enterprise freezeout for non-mainstream technologies across the board. Of particular concern is the impact this will have on the adoption of functional programming in enterprise and small business B2B development.
I see now that the last 10 years were “easy mode” for the growth of new programming tools and infrastructure, with many new breakthrough technologies seeing rapid adoption. Languages and platforms like Node, Go and to some degree Scala saw breakaway success, not to mention all of the new cloud tech, NoSQL tech, containerization and data processing platforms along with their custom query DSLs. Other languages like Haskell saw success in small companies and skunkworks-style teams solving very difficult problems.
The Rise of Vulnerability Scanning
Just this past year I’ve come to see we’re in the middle of a massive change across the industry. There are new forces at play which will calcify current software stacks and make it extremely hard for existing or new entrants to see similar success without a massive coordinated push backed by big enterprise companies. This force is the rise of InfoSec and vulnerability detection tooling.
Tools like Black Duck, WhiteSource, Checkmarx and Veracode are exploding in popularity; there are too many to list, with many variations on the same theme. In the wake of so many data leaks and hacking events, enterprises no longer trust their developers and SREs to take care of security, so protocols are being implemented top down. This isn’t just on the code scanning side: there is a similar set of things going on with network scanning as well, which impacts programming languages less but will similarly calcify server stacks.
These tools are quickly making their way into SOC2 and SDLC policies across the industry, and if your language or new infrastructure tool isn’t supported by them, there’s little chance you will get the already tenuous approval to use it. This sets the already high bar for adoption much higher. As you might expect, vendors will only implement support for languages that meet some threshold for the profitability of their tools. Not only do you need to build a modern set of tools for your language to compete, now you also need support from external vendors.
Vendor Due Diligence
Maybe we just cede this territory to enterprise tools with big backers like Microsoft and Oracle; we never made more than a few small inroads there anyway. The use of these tools is arguably a good thing overall for software security. Unfortunately, the problem cannot be sidestepped so easily, and I’m afraid this is where things look very bleak. The biggest new trend is the enforcement of these tools through Vendor Due Diligence.
You may not be familiar with Vendor Due Diligence if you aren’t in a manager role. The basic idea is your customer will send you a long list of technical questions about your product which you must fill out to their satisfaction before they buy your product or service. In the B2B space where I work these lists are nothing new, but have been getting longer and longer over the last 10 years, now often numbering in the hundreds of questions.
Most recently I’ve seen more and more invasive questions being asked, some even going into how teams are organized. Important to this article, though, is that across the board they now all ask about vulnerability scanning, and often request specific outputs from well-known vulnerability scanning tools. The implication is that if you’re not scanning with these tools they won’t buy your software, and the list of supported languages is small.
Any experienced technology manager sees the natural tradeoff here. When it comes down to making money versus using cool tech, cool tech will lose every time. You’re just burning money building cool things with cool tech when you know no one will buy them.
So What Now?
Potentially we will see a resurgence of “compile-to” functional programming with mainstream language targets to sidestep the issue. I suspect, though, that the extra build complexity and the problems with debugging will prevent this from ever being mainstream, not to mention that the vulnerability tools look for specific patterns and likely won’t behave well on generated code.
There is some hope in the form of projects like SonarQube, which enables users to come together and build custom plugins. Will functional programming communities come together to build and maintain such boring tech? I somewhat doubt it. This kind of work is not what most programmers would choose to do in their off time. Similarly, vulnerability detection is unlikely to be a good target to be advanced a little at a time with academic papers. It would take true functional programming fanatics to build companies or tools dedicated to the cause. If you are interested in helping out, pay attention to the OWASP Top 10, as this list drives focus for many infosec teams.
Where does this leave us? If our communities do nothing, then smaller operations, like mom and pop shops focused on B2B software or consumer-focused web applications, likely won’t see any impact unless static analysis makes it into data protection law. Beyond these use cases FP will be relegated to tiny boxes on the back end, where vulnerabilities are much less of a concern and the mathematical skills of functional programmers can bring extreme amounts of value.
I know there are many deeper facets I didn’t cover here; if you want to continue the discussion, join the thread on Twitter.
There is some controversy as to when Scrum was invented, but many attribute it to Hirotaka Takeuchi and Ikujiro Nonaka in 1986. While still new and cutting edge to many companies, this 30-year-old process has its fair share of both proponents and opponents. Seeing as I started programming at about this time under the watchful gaze of my stepfather, I’m going to take a look back over my own career and speak to how my perception of Scrum has matured as I went from solo developer, to working in teams, and more recently to leading teams myself.
As a solo developer your process is your own. Your organization and motivation (or lack thereof) is what makes or breaks your work. It’s easy to develop a lot of bad habits, and these can be hard to break when joining your first team. There’s also great pride to be found in building things yourself from scratch, but this itself can be a pathology, leading to a defensive “I am my code” kind of perspective. In the late 90s and early 2000s I had never heard of Agile, Scrum or Kanban. I managed to get by with piles of haphazard notes covered in scribbles, usually jumping from one half-baked idea to the next with no regard for the big mess I was creating. I still managed to make things, but I hate to think of how much of my early career was spent rewriting the same thing over and over from scratch because I had coded myself into a corner yet again.
I had my first experience working on a real team fresh out of university at Atalasoft. When I first joined, each developer was a silo, rarely interacting with the others. Surprisingly, this largely worked, thankfully, as the company was founded on experienced people and had zero turnover. It also helped that they had proper infrastructure: source control with continuous integration, and bug tracking with FogBugz. But the company was growing and starting to hire junior people out of the local university (such as myself), and it was quickly becoming obvious that something needed to change.
My first real assignment after coming on and getting oriented was to attack the huge backlog of bugs in our tracking system. These bugs spanned every silo across the company and were in at least four different languages. At first it was difficult working with people who were not used to having outsiders poking around in their code. There was defensiveness and sometimes even anger. It didn’t help that I was pretty arrogant, thinking I was hot stuff with my almost perfect CompSci GPA. Nothing brings humility like working in a large legacy codebase, however.
I came to appreciate when people had written tests, even if the coverage was poor; they were like signposts on the road. The worst code to work in was always the untested, but I was able to move forward with a copy of Michael Feathers’ Working Effectively with Legacy Code largely guiding my sometimes painfully slow progress.
As a side note, we at one point attempted TDD but it slowed development to a crawl and had only a small impact on bugs per line of code after a release cycle. The much more effective approach we landed on later was to have a sprint where we tested each other’s code. This became a kind of contest to see who could break each other’s stuff in the most horrible way and testing became great fun. I would look forward to this sprint more than any other.
About a year after I joined the decision to adopt Scrum and Agile was made and many (probably most) people were unhappy about the change. At the time I was particularly unhappy about the 9am daily meetings (which I was almost always late for). I think this decision was largely a response to the difficulties we were having with the productivity of newer hires. The silo model was breaking down as junior programmers were being brought on.
At first we struggled with all of the planning and meetings, but after a month or so we were back at our former level of productivity, and within six months things were much improved. Junior devs were being given appropriately sized pieces of work. There were multiple people successfully working on the same parts of the code base. There was still some resistance, and there was a lot of pain around letting go of code ownership, but it was largely working.
Under this system I was able to grow as a developer in great strides. I was tested with larger and larger problems, and within two years of joining Atalasoft I was designing whole new products. This was only possible because we had a system in place that made it obvious if I was stuck, and a system in place to help me decompose problems if I needed it. By the time I left Atalasoft I was running the Scrum meetings myself.
Today I don’t use Scrum in running my research department. We are a small collection of experts working largely (but not entirely) in silos on research problems. Scrum would be too much structure for what we’re trying to accomplish. However, I wouldn’t hesitate to reach for Scrum again if I were building a team to make products or if I had a large proportion of junior developers. I know of no faster way to grow a junior hire into a full-fledged software developer.
Every blog post I’ve read about using F# with SQL CLR involves turning off SQL CLR security for the entire database like so:
ALTER DATABASE mydatabase SET TRUSTWORTHY ON
This is because there is not currently a “safe” FSharp.Core, and so even when building with --standalone you must load the assemblies into SQL Server as “unsafe”.
In our business we deal with real live bank data, and I would never be able to sell this to our ops team. Fortunately, there is a way to allow unsafe assemblies on a per-assembly basis as I found out here. It’s a little more complex, but not awful.
I’m going to gloss over some of the wiring things up code here, as it can be found in many other places.
1) As with any SQL CLR project, you need to enable CLR before you can use it:
SP_CONFIGURE 'clr enabled', 1
RECONFIGURE -- apply the configuration change
2) You must sign your to-be-loaded assemblies with an asymmetric key pair.
In F# you do this by adding an attribute to your assembly. I like to put these kinds of things in my AssemblyInfo.fs. You can find more information on how to create a public-private key pair here. I also recommend compiling with the F# --standalone flag, so as to avoid having to pull in FSharp.Core as well.
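For reference, here is a minimal sketch of what that attribute looks like in AssemblyInfo.fs; the key file name is a placeholder for one generated with sn.exe -k keypair.snk:

module AssemblyInfo

open System.Reflection

// Sign the assembly with the public-private key pair (file name is a placeholder)
[<assembly: AssemblyKeyFile("keypair.snk")>]
do ()

Alternatively, passing --keyfile:keypair.snk to the F# compiler accomplishes the same thing.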
3) Make sure you’re in the Master database.
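In T-SQL that’s simply:

USE master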
4) Pull that asymmetric key into SQL Server and give it a name.
CREATE ASYMMETRIC KEY FSHARP_CLR_Key
FROM EXECUTABLE FILE = 'C:\MyProj\bin\Release\FSHARPCLR.dll'
5) Create a login linked to this asymmetric key. You also need to give this login external assembly access.
CREATE LOGIN FSHARP_CLR_Login
FROM ASYMMETRIC KEY FSHARP_CLR_Key
GRANT EXTERNAL ACCESS ASSEMBLY TO FSHARP_CLR_Login
6) Move to the database where you’ll be deploying your assembly.
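Again a single USE statement does it; MyDatabase is a placeholder for your own database name:

USE MyDatabase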
7) Make a database user to run under.
CREATE USER FSHARP_CLR_Login
FOR LOGIN FSHARP_CLR_Login
8) Pull your assembly into SQL!
It’s still unsafe, but in this case it’s special dispensation for a single assembly with controllable permissions instead of whole-hog access.
CREATE ASSEMBLY FSHARP_CLR
FROM 'C:\MyProj\bin\Release\FSHARPCLR.dll' WITH PERMISSION_SET = UNSAFE
9) Wire up the API with CREATE FUNCTION or CREATE PROCEDURE calls as you would normally.
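As a sketch, the wiring for the string distance function used below might look something like this; the module path MyProj.Distance and the int return type are my assumptions, not from the original post:

-- EXTERNAL NAME is assembly_name.[namespace.module].method_name
CREATE FUNCTION dbo.StringDistance(@s1 NVARCHAR(4000), @s2 NVARCHAR(4000))
RETURNS INT
AS EXTERNAL NAME FSHARP_CLR.[MyProj.Distance].StringDistance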
10) And now we can call it easily in SQL!
SELECT dbo.StringDistance('Richard', 'Rick')
Please let me know if you run into any incompatibilities with future versions.
I don’t usually review hardware here, but I think this device stands out as being particularly useful to people who take a lot of notes and/or read a lot of research papers.
I read about the Sony Digital Paper DPT-S1 for the first time about a year ago and couldn’t help but be impressed. It promises the ease of reading of e-ink, combined with a size that is amenable to academic papers, and on top of that it allows you to actively annotate documents with a pen as you read them. It also sports the usual three weeks of e-ink battery life. Luckily enough, I managed to get one of my very own right before Lambda Jam 2014, and so had the perfect opportunity to give it a spin in a real use case kind of setting.
Reading and marking up papers in PDF format is where this device shines.
You simply swipe to turn pages, and it works every time. There’s even pinch zoom. The screen is large enough that you can easily read an entire page without zooming, which was always the problem I had with my first-gen e-ink Kindle (the DPT-S1 also weighs substantially less). You even get multiple tabs in a workspace, so you can swap between different documents quickly for cross-referencing.
In this context it’s a better Kindle DX (now discontinued) that you can take notes on. For me (and for many others I suspect) reading a paper is a very interactive experience. You want to be able to highlight the important parts and even scribble in the margins as you move through it. The DPT-S1 supports this better than any device I have seen yet.
As you can see here, you can not only write directly on the paper, but you can also highlight. Both are done with the included stylus, whose standard function is writing but which changes to highlighting if you hold down the button on its side. You may also notice the little boxes in the margin of the text; these are collapsible notes.
As you can see, the darker square in the top right margin is opened here and available for writing. Also please note that these notes were taken by me (with generally horrible handwriting) on the way to Lambda Jam, in a squished economy seat of an airplane, while there was some mild turbulence. While the handwriting isn’t paper-perfect, it’s much better than on other devices I’ve used in the past, including the iPad.
One of the best features of the DPT-S1 is also its most limiting: It’s designed to work only with PDF files. The big benefit of this is that all of these writing annotations actually turn into PDF annotations on the given file. This makes them extremely easy to export and use in other contexts.
The other big use case I had in mind for the DPT-S1 was taking notes. I always carry a notebook of some form and over the last three years I’ve managed to create quite a lot of non-indexed content.
I usually carry one notebook for notes/exploring ideas, another for planning things (like to-do lists and such), and finally one small one for writing down thoughts on the go. This stack doesn’t include notes from talks I’ve attended or my Coursera class notes. It also doesn’t include the giant stack of hand annotated papers in my office, but that’s more to do with the previous section.
I took pages and pages of notes on the DPT-S1 at Conal Elliott’s talk at Lambda Jam (great presentation, by the way). Here’s a side-by-side comparison with some paper notes I’ve written in the past.
As you can see, my handwriting isn’t great, as I tend to go kind of fast and sloppy when not looking at the paper, but the DPT-S1 holds up rather well. I think it would do even better for someone with nicer handwriting than mine.
There is one somewhat annoying downside: when you make a new notebook PDF to take notes, it only has 10 pages, and you have to give it a name with the software keyboard (it defaults to a date and time based name). This slowed me down big time in the talk, because he was moving very fast toward the end, and that’s precisely when I ran out of pages. Still, given how well polished the rest of the device is, it’s something I can overlook.
Browsing the Web
The final use case for the DPT-S1 is web browsing. This isn’t something I really need, as my phone usually does a pretty good job of it, but it could be nice to have for reading blogs and such, so I’ll touch on it.
My blog actually renders quite well and is very readable; you can scroll by swiping up and down, and pinch zoom works here too.
I went to several sites and they all worked well enough, but given that this device is WiFi only I don’t expect I’ll be using it much for reading blog posts on the go.
If you’re looking for a cheap consumer device that you can easily buy e-books for, you should look elsewhere. It’s expensive (~$1,000 USD), hard to acquire (you have to email and talk to sales agents), has no store, no API (only the filesystem), and only supports PDF.
However, if you’re like me in that you take a lot of notes and you read a lot of papers, and you don’t mind spending a bit of money on something to solve a major problem in your life, this is by far the best device on the market for your needs.
Please note that while they are available on Amazon, it’s the imported Japanese-language version. Currently the only way to get an English version DPT-S1 is by contacting the sales team at WorlDox.
It’s been a great year for F# with the blossoming of the fsharp.org working groups. It’s been amazing watching the community come together to form a movement outside of Microsoft. This is most certainly the long term future of F#, protected from the whims of layer upon layer of management. Who knows, in the coming year we might even see community contributions to the F# Core libraries. Who would have thought that would ever have been possible?
I’m very happy to see that Sergey Tihon has maintained his wonderful weekly roundup of F# community goings-on. It’s a big time investment, week after week, to keep the news going. After leaving Atalasoft, and no longer being paid to blog on a regular basis, I found I couldn’t keep investing the time, and I felt very bad about not being able to continue my own weekly roundups. Sergey has picked up that mantle with a passion, and I’m so very glad for this extremely useful service he provides to the community.
Meanwhile Howard Mansell and Tomas Petricek (on his sabbatical at BlueMountain) worked toward building a bunch of great new tools for data science in F#. The R Type Provider has become extremely polished, and while Deedle may be fresh out of the oven, it already rivals pandas in its ability to easily manipulate data.
At Bayard Rock, Paulmichael Blasucci, Peter Rosconi, and I have been working on a few small community contributions as well. iFSharp Notebook (an F# kernel for iPython Notebook) is in a working and useful state, but is still missing IntelliSense and type information, as the iPython API wasn’t really designed with that kind of interaction in mind. The Matlab Type Provider is also in a working state but still missing some features (I would love some community contributions if anyone is interested). Also in the works is a nice set of F# bindings for the ACE Editor; I’m hoping we can release those early next year.
Finally, I wanted to mention what a great time I had at the F# Tutorials in both London and NYC this year. I also must say that the London F# culture is just fantastic; Phil is a thoughtful and warm community organizer, and it shows in his community. I’ve been a bit lax in my blogging, but they were truly both wonderful events and are getting better with each passing year.
That right there was the highlight of my year. Just look at all of those smiling functional programmers.