August 20, 2016
User Interfaces Suck
A little while back, there was an article in The Verge discussing Microsoft’s long-term thinking around the mobile internet experience. The particular point in the article that caught my eye was that Satya (CEO) and Qi (VP of ASG) believe the future of mobile internet lies in what they call a “conversation canvas.” The premise is that today’s model for interacting with the Internet on mobile devices is fundamentally flawed, leaving users generally unhappy and doing the majority of their Internet browsing on conventional desktops.
The idea is that the “conversation canvas” will allow a more natural way to interface with your mobile device and increase engagement on mobile platforms. Much of this idea is based around improvements in AI and natural language processing, allowing users to interact with their devices in a more natural way. While I don’t know how I feel yet about talking to Siri or Cortana or Alexa (what does Google call theirs?), I recognize the value in the idea and am quite pleased that smart people in my own company are as convinced as I am that user interfaces could be so much better.
Now I said I’m not sold on the “natural” interface by itself — but the article got me thinking about what I dislike about user interfaces. Windows, macOS, and *nix share the same general model for desktop user interfaces. Android, iOS, and Windows Phone share the same general model for mobile user interfaces. These models work, but they’re not great. Depending on how you feel, they’re anywhere from bad to slightly above mediocre.
So if the answer isn’t today’s model and if I’m not convinced it’s necessarily the “natural” interface, then what is it?
Think about how you interact with your computer today. It’s primarily text manipulation. You write emails, papers, text messages (SMS, Facebook, WhatsApp, etc.), search terms, notes, and so on. If you’re not writing, you’re reading, because text is the most common human mechanism for conveying information. For simplicity’s sake, let’s not think too hard about graphic artists or gamers (I consider these closely tied in interface models, especially with the advent of VR). Theirs is an interaction model that has more in common with the “conversation canvas” mentioned above, and I don’t know enough about the space to babble meaningfully about it.
History Detour: Once upon a time, there was a computer known as the Canon Cat. This machine was designed by Jef Raskin, the famed designer of the original Macintosh computer. It had a novel text-only user interface combined with a special keyboard. The keyboard combined the standard QWERTY layout with a pair of meta keys [1] that allowed you to invoke meaningful commands that interact with the text user interface. The beauty of this system was the speed with which one could navigate it and query information - for example, looking up the definition of a word on screen or bringing up a list of adjectives for a highlighted word. Extrapolate that to programming, which isn’t all that different from writing, and you’ve got a very productive environment.
The problem with our user interfaces today is that they’re not high fidelity enough with the mediums we interact with. The “conversation canvas”, NUI, and VR are approaches to improving fidelity with visual and physical mediums. The approach Jef Raskin developed with the Canon Cat is a way to improve fidelity with certain text mediums. In fact, the Internet, as Tim Berners-Lee imagined it, was meant to improve the fidelity of our experience with text.
So what do we do? Well - it’s clear that we’re in the age of VR/AR now. Facebook has Oculus, Steam has its own VR system, and Microsoft is developing HoloLens. In developer tools, we’ve seen things like Chris Granger’s Light Table project as well as things like Haskell for Mac. I’ve not seen an improved text interface for word processing yet. It seems like this space may be ripe for innovation - an immersive experience that better ties together the practice of writing, researching, and reading.
1. Here’s a video that will better explain the interface.
August 16, 2016
Haskell & Stubbornness
A persistent theme for the entirety of my adult life as a programmer has been attempting to learn the Haskell programming language. Everything about the language appeals to me, from strong static typing to lambdas (my first exposure to them) to its purity. Despite these repeated attempts over the last ten years, I have been unsuccessful in becoming a proficient Haskell programmer.
So what’s kept me from Haskell enlightenment?
Stubbornness, pure stubbornness. Or at least so I’m convinced.
See, the Haskell community is rich and plentiful with very smart people. Additionally, Haskell has incorporated some ideas from mathematics. These mathematical ideas introduced words like Monad and Monoid into the vocabulary of the community. While the constructs in Haskell share the same name as their namesakes in mathematics, they are not identical.
So we have a community of very smart individuals and a set of semi-obscure mathematics concepts which mostly describe concepts in Haskell. Now, when they write books, guides, or tutorials about Haskell, they use this expanded vocabulary that is outside most programmers’ experience.
Back to me; this is a story about me. I’m stubborn. Haskell presented a unique challenge for me. It came with a vocabulary that I felt was elitist. For a long time, I counted myself in the camp of folks who jokingly told other programmers that Haskell requires a PhD in math as a prerequisite to learning it. I read all of the books (LYAH, RWH, you name it) but still, it just never stuck. The concepts were beautiful, but the details felt heavy-handed with obscure math concepts.
Well the stubbornness runs both ways - because I’ve never gotten the hang of Haskell, I’ve yet to give up on learning it. My latest attempt was initiated by a new Haskell book called Haskell Programming from First Principles. Typically, I’d have ignored the book but the story of one of the authors caught my eye. She is not a professional programmer by trade and was taught (from first principles) by her co-author.
The story of the author reminded me of my stubbornness and so here we are. I’m reading this new book, learning Haskell in a new way, and hoping this time it sticks.
May 7, 2016
Stupid Git Tricks: 1 of N
To the surprise of some developers outside of Microsoft, my team uses Git for the vast majority of our code (10,000+ lines). Since we work on Windows, Git is built using a bastardized subset of Cygwin to provide some of the POSIX facilities it requires. This generally works, but there is some cruft that occasionally rears its ugly head.
Recently, I performed a clean install of Windows 10 on my work laptop. Shortly thereafter, I started encountering a puzzling issue with Git when I was working from home. Any time I’d ask Git to do an operation that required hitting the wire or talking with the origin, it would inexplicably hang. Eventually, it would crash and spew some error that was largely meaningless.
After googling for a couple of hours, I found issue 493 on GitHub for the git-for-windows project. Apparently Git is poorly behaved with whatever changes occurred in the users/groups APIs when the machine is unable to resolve the domain controller. To mitigate this issue, a workaround was identified where you would install Cygwin and copy its settings for POSIX groups and passwd, working around Git trying to resolve the domain controller.
Assuming that your Cygwin installation is at C:\cygwin\ and that your Git for Windows installation is at its default location:
- Install Cygwin
- From a Cygwin Bash prompt, type in the following:
getent passwd $(id -u) > /etc/passwd
getent group $(id -G) > /etc/group
Now, if you’re off of your office network, you should be able to push/pull without issue.
Some Side Notes
- I’m on a domain joined laptop.
- I’m using Git Credential Manager to seamlessly pass my credentials to Visual Studio Team Services (formerly VSO)
April 10, 2016
Adventures in Gigabit Internet
In my never-ending pursuit of novelty, I recently acquired gigabit internet. I discovered back in November of 2015 that CenturyLink would begin offering gigabit internet in my Seattle neighborhood, and that it was competitively priced with service about 1/5th as fast from Comcast. I thought it over for a bit and ultimately decided that I must have this new and shiny thing so that I may lord it over family and friends.
Brief side story: I often hear people complain about Comcast - the service is always going down, the internet connection is slow, the customer service is poor. Ignoring the data caps, which are a valid complaint, those who have only had Comcast in Washington don’t know what bad Comcast service is. I’ve lived in Texas and Florida and had Comcast in both places, and there were times I wasn’t sure the internet was going to work for days at a time, not just hours. In the time I had Comcast in Seattle, I only had one issue, and it affected the entire West Coast.
Back to our regularly scheduled programming… so I signed up for CenturyLink Gigabit in November with an estimated installation date of about a month later in December. Apparently they were unable to roll out fiber fast enough and were working 18-hour days just to get people hooked up.
The December installation date rolled around, and I noticed that at some point someone had strung fiber to the house (probably the day before). The technician came out, explained what he’d need to do, asked if I had any questions, and began his work. The installation was smooth, taking about two hours and requiring the technician to drill a single hole through my exterior wall to run the fiber into the house. I was pleased that he drilled the hole at a downward angle from inside to outside and filled it with silicone caulking after running the fiber, maintaining the moisture barrier.
The technician finished up, set up the modem/router combo, and confirmed I had a connection with his tool. I began playing with the connection while he was cleaning up and noticed that the modem/router seemed to max out at around 600 Mbps versus the gigabit (1000 Mbps) speed I signed up for. Now 600 Mbps is nothing to sniff at, but it’s not 1000 Mbps. I brought it up with the technician and he noted that it can take a couple of days for the connection to get set up correctly on the CenturyLink side — I didn’t really buy this explanation, but I let it go at that. There was likely nothing he could do anyway, as I had my own suspicions as to what was going on.
Remember when buying a new processor was the easiest way to speed up your computer? That ended around the Pentium II time frame, based on my recollection. How about when the easiest way to speed up your computer was to buy more RAM? That lasted till around 2010 or so, when consumer-grade SSDs became widely available at reasonable prices. So from 2010 onward, the easiest way to improve your computer’s performance was to install an SSD as the primary operating system drive. Well, we’ve hit a new benchmark — gigabit internet is fast enough that the vast majority of SOHO (Small Office/Home Office) networking gear is no longer sufficient. Now we don’t buy new computers to speed up our experience, we buy new networking gear.
So I head to the always trustworthy Internets and begin searching for information about what router to get for CenturyLink Gigabit. I turn up two or three useful articles…
- Bypassing needless CenturyLink Wireless Router on Gigabit Fiber
- A big internet upgrade: A real-world review of the new 1 Gigabit internet from CenturyLink
- I have CenturyLink’s 1gb fiber to the home and it’s glorious!
One of the articles (2) notes needing a new router while another simply wants a new router (1). The last article (3) seems to be able to get the elusive gigabit speeds with the default router from CenturyLink, but it’s definitely a different model than the router I was given. Side note: you’ll notice that when those articles show their SpeedTest results, they show something like 930–940 Mbps, and that’s actually the practical limit for fully capable gigabit hardware once protocol overhead is accounted for. To get actual gigabit you’d need to spend an order of magnitude more for 10 Gbps hardware.
So those articles are fun and good, but what router do I buy? I don’t just want a gigabit router, because that just means the local network traffic moves at gigabit. I need a router with a good WAN → LAN speed. After some digging online, I found SmallNetBuilder and discovered the Ubiquiti EdgeRouter Lite-3. It looked to be highly configurable and supported a key feature that is needed if you want to cut out the CenturyLink branded router — VLAN tagging. For whatever reason, the VLAN used by the ONT (Optical Network Terminal) is tagged 201, and so you can’t just use any old router to talk to the ONT.
Further searching on the internet revealed that multiple individuals have had success getting the EdgeRouter Lite set up with CenturyLink, so I went ahead and bought it. This particular router is complex to set up — or at least more complex than any I’d tried to set up in the past, despite having used DD-WRT with some fairly complex settings. With a little searching though, I was able to adapt a config used by a forum user to get my EdgeRouter Lite set up correctly.
For your convenience: a scrubbed version of my config.boot for the EdgeRouter Lite.
This config will set up:
- eth1 as WAN
- eth0 as LAN (192.168.1.1/24)
- PPPoE connection (look for @qwest.net)
- WAN → LAN bridging
- VLAN tagging for VLAN 201
- OpenDNS as the DNS nameservers
- the correct MTU of 1492 to communicate with the CenturyLink backend
- enable DHCP server
- firewall rules to drop all incoming requests
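To make the VLAN and PPPoE pieces concrete, here’s a rough sketch of what that portion of a config.boot can look like. This is illustrative, not my literal config: the VLAN 201 tag, MTU 1492, @qwest.net user ID, eth0/eth1 roles, and LAN subnet come from the list above, while the structure and placeholder credentials are from memory of EdgeOS syntax and may need adjusting for your firmware version.

```
interfaces {
    ethernet eth0 {
        /* LAN side */
        address 192.168.1.1/24
    }
    ethernet eth1 {
        /* WAN side, facing the ONT; CenturyLink tags traffic on VLAN 201 */
        vif 201 {
            pppoe 0 {
                default-route auto
                mtu 1492
                password CHANGEME
                user-id someone@qwest.net
            }
        }
    }
}
```

The full config.boot linked above also carries the DHCP, DNS, and firewall sections; this fragment is just the part that lets a non-CenturyLink router talk to the ONT at all.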
This should be a basic setup. I haven’t spent any time trying to look into opening up firewall ports so that’s decidedly outside my expertise. It was a wonder I got the above features working and stable.
The only addition to this setup is my Apple AirPort Extreme (802.11ac) in wireless bridge mode to serve WiFi to the house. With that setup, my MacBook Air will get around 240 Mbps and my 3rd gen ThinkPad X1 Carbon will get closer to 300 Mbps.
TL;DR: Well, I can get the 930 Mbps speed I was promised with the Ubiquiti EdgeRouter Lite-3 if I plug a gigabit interface directly into eth0. Keep in mind that you’re only going to see that speed if you have no real networking between the router and you — so eth0 wired directly to the computer. That said, for $100 it’s one of the best routers I’ve ever owned and it’s been extremely stable, with no speed drops or rebooting of the router. It just chugs along.
January 24, 2015
My wife and I have recently begun the home-buying process. After looking at our finances last fall, I realized that if we were to liquidate some of our investments, we could have a down payment for a house. With this in mind, I began lining up the assorted “ducks” needed in buying a house, namely:
- determining a price range
- determining a down payment for that price range
- looking at our mortgage options
Around the turn of the new year, we finally got a chance to go to meet with a representative from our credit union and talk over the basics. While I was familiar with some of the info, my wife had little knowledge of the process (and to be honest, I wanted confirmation of my own knowledge).
Prior to all of this, I had occasionally perused Redfin and Zillow. After a while, I determined I actually preferred Redfin and thus began keeping active searches, just so I could get an idea of what I liked in a house and what I didn’t. After we had met with the credit union, I started spending more time looking at details and scrutinizing photos provided on Redfin. It was this closer scrutiny that brought to my attention a detail I hadn’t previously noticed: Redfin realtor ratings mean nothing.
What do I mean by that? Why would I say that? People love reading feedback about individuals in service industries and how they treat their customers. Of course the ratings matter. No no, you misunderstand me. The reviews mean something, the ratings mean nothing.
The Redfin Realtor Search site is the way I looked up ratings for Redfin Realtors in our area. Go ahead, take a look at the ratings for realtors. At least in my area, Seattle, there’s not a single realtor they list that doesn’t have a 4.5 out of 5 stars or better rating. That leads me to the corollary to my previous statement: When everyone is a 4.5/5, no one is a 4.5/5.
This is the same concept as participation ribbons and “honorable mention.” If your scale isn’t actually differentiated, then the scale doesn’t matter. It’s the reason your college courses were curved. It’s okay that the highest grade was a C+ and you got an F; the professor is going to curve the grades so that the person with the C+ gets an A+ and now you’ve got yourself a C. The professor wasn’t interested in the absolute value of the grades people got; he knows that shit is hard. Maybe he feels he didn’t cover a topic well enough, and now, because everyone bombed the midterm, he’ll correct it by curving the course. Curves have an interesting side effect in that they encourage the class as a whole to gravitate toward a small range of grades. If some asshole gets an A before the curve, everyone else is hosed. But that’s a story for a different time, and now I’ve gotten sidetracked.
So if every realtor is rated 4.5, how do you differentiate? Well, you meet with realtors until you find one that clicks. The homebuying process is incredibly nerve-racking your first time, or at least it is for Caroline and me. You want a realtor that can keep you grounded (within your price range, mindful of how much maintenance to take on) and keep you calm. You can’t discern this from a rating, nor can you discern it from a review. That brings me to my next point: the written reviews matter because they’re a way to narrow down what you’re looking for.
If you’re a high energy person, you probably like being around high energy people. If you’re prone to being a little neurotic, you probably should have some people that are level-headed in your life. Same goes for a realtor. While the reviews written about a realtor help you filter, you won’t be able to really discern how good of a fit you are with them until you meet them and spend some time chatting.
A good example is our experience interviewing realtors. We talked with two realtors, who shall remain anonymous. Both were great and lovely individuals, and both had qualities we loved. I’m a detail-oriented guy, security engineer and all. I like people that I feel are on top of their shit: they’ve got it together, they know what’s happening. My wife likes to make an emotional connection with the people around her. She wants to feel involved and connected, and wants them to feel involved and connected. The first of the two realtors we met (let’s call her Realtor A) was very detail oriented: a clear Type A personality, very results oriented but very friendly too. The second realtor we met was less of a Type A personality, but she clicked really well with my wife and me on a personal level. She seemed to have a calming effect that worked well with my wife’s anxiety, and yet she didn’t seem inattentive to the details, which calmed my own anxiety.
So if the ratings don’t matter, why does Redfin post them? My presumption is that, similar to keeping a merit scholarship in college, Redfin-affiliated realtors must maintain a certain quality of review feedback as well as throughput on home sales & purchases. To sustain a 4.5+ rating and those sale/purchase numbers in the first place, a realtor probably already is, holistically, a 4.5/5 realtor. They’re just that good.
So what do you propose, Zac? Now, I can’t say with 100% certainty that Redfin only displays 4.5+ realtors, but if that’s really all they display and/or affiliate with, there’s no need to show the rating. Absolutely keep the reviews. By displaying the ratings, they create a false market where consumers believe these ratings matter when they don’t. If a consumer trusts Redfin enough to even consider Redfin’s affiliated realtors, they’re already bought in. I remain unconvinced that the ratings actually influence users’ decisions to go with a Redfin realtor. That said, it’d be interesting to see Redfin run an A/B test for a period to see if there’s a meaningful drop in users requesting Redfin realtor services.
All in all, the point is simply this: When everyone’s a 4.5/5, no one’s a 4.5/5.
January 19, 2015
In the beginning, there was Zac. He discovered Linux and ran fifty thousand different distributions, never being satisfied. When one distribution fixed an issue, thirty new issues were created. Subsystems bickered, sound servers wept, and bus systems rent their clothing.
This went on for years, and finally Zac gave up. He threw in the towel and bought a MacBook. He basked in the mostly-Unix environment provided by OS X. He occasionally tried to take Linux back, but it was always a disappointment. In one of his more recent forays into *nix, he ventured into the BSD realm. Using FreeBSD from versions 7 to 9, he found a mostly harmonious environment. There were forays into other BSDs, but these were but fleeting dalliances. Then the FreeBSD gods thought fit to introduce hellacious regressions and complications around version 9. Thus ended Zac’s adventures in *nix yet again for several years.
Enter OpenBSD in 2015, stage left.
It seems every year we hear that 20XX is the year of the Linux desktop. Inevitably, every Linux pundit from Brazil to Mongolia will extol the virtues of switching to Linux. Now don’t get me wrong, Linux does great things, but it’s never worked well enough out of the box as a desktop. There’s always something to tinker with, some driver to compile, some knob to fiddle with.
I installed OpenBSD 5.6 on my old ThinkPad X201 and much to my surprise, it just worked. Better than installing Windows out of the box on this particular machine, in fact. WiFi required a firmware update, but that was as simple as running fw_update. I configured a few settings with the help of the very thorough OpenBSD documentation and it’s pretty much been cake.
Maybe 2015 is the year of the OpenBSD desktop rather than the Linux desktop.
USB Key Setup
I won’t cover how to burn an ISO to a CD. That territory has been well trodden since before I started using *nix. In fact, I’m not even going to cover USB key creation with OpenBSD install media. OpenBSD does a better job of that in section 4.3.4 of the FAQ.
I’ve taken to storing the various tweaks to config files in OpenBSD in my configs repo: https://github.com/zacbrown/configs/tree/master/openbsd. There are three main groups in the ‘openbsd’ folder of configs, described below.
- fw_update - this is required to update the WiFi firmware. My chipset is the iwl-1000. YMMV.
- etc - everything in this directory can be copied directly into /etc/.
- xorg.conf - a little chunk of Xorg config to set up the TrackPoint
- rc.conf.local - the basic settings I use in mine, including specifying that apmd should dynamically scale the CPU, which daemons to launch, what flags to pass to PF, etc.
- BSDNow.tv has a good coverage of some of the things you’d put in rc.conf.local.
- login.conf - Settings for how much heap processes can take before the OS forcibly kills them.
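As a sketch of what rc.conf.local can carry, here’s a minimal laptop-flavored example. The flag values are illustrative assumptions for a 5.6-era install, not a copy of my file:

```
# /etc/rc.conf.local (sketch)
apmd_flags="-A"     # run apmd with automatic CPU speed scaling
ntpd_flags=""       # start ntpd to keep the clock in sync
```

Each `*_flags` line both enables the daemon at boot and sets the arguments it launches with.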
The aforementioned settings are pretty key to a good experience with OpenBSD as a desktop. The xorg.conf file is necessary for the TrackPoint middle-click button to work for scrolling. While rc.conf.local changes aren’t required, many of the options I’ve specified there are good suggestions for laptop configuration. The login.conf changes are necessary since web browsers are terrible memory hogs.
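For illustration, raising the per-process data size limits in login.conf looks something like the excerpt below. The limit values here are placeholders I picked for the example; choose ones that suit your RAM, and note the change takes effect on your next login:

```
# /etc/login.conf (excerpt): default class with larger data size limits
default:\
        :datasize-max=4096M:\
        :datasize-cur=2048M:\
        :tc=auth-defaults:
```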
I don’t actually have any “elaborate tweaking” that had to be done. One open issue is getting the hardware volume buttons to control the hardware mixer rather than routing the commands through X to the application with focus.
- etc - everything in this directory can be copied directly into /etc/.
- apm - this folder contains scripts that will be run for various apmd events (suspend, standby, hibernate, shutdown)
- In my case, the suspend script is used to cause the Xsession to lock. See .xinitrc below for what happens.
- pf.conf - firewall settings, block all inbound. Might need to allow ssh at some point.
- bin - some basic helper scripts
- wireless & wireless.cfg - wireless is a perl script that nicely connects to preconfigured wireless networks specified in wireless.cfg. You can find the original author of the script here.
- powersaver-mode - uses sysctl to set hw.setperf to the lowest setting (0) to reduce power usage
- performance-mode - uses sysctl to set hw.setperf to the highest setting (100) to maximize CPU performance
- .xinitrc - This file is read before X11 is started. Notable entries in it are the launching of xidle, which is used by the aforementioned apm scripts to trigger the Xsession to lock on suspend.
- .kshrc - This file is loaded for each new ksh instance. Just some basic defined variables used in the terminal.
- .profile - This file is loaded for each new logon session. In order to get it to reload fully, you need to logout and back in.
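The powersaver-mode and performance-mode helpers boil down to a single sysctl call each. On my system the writable knob is hw.setperf, OpenBSD’s 0-100 CPU performance percentage (hw.cpuspeed just reports the current clock in MHz). A sketch, assuming you run it as root:

```
#!/bin/sh
# performance-mode: pin the CPU at its highest speed.
# powersaver-mode is identical but sets the value to 0.
sysctl hw.setperf=100
```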
The configurations above are the extra tweaks I’d have made in some form in Windows had I just installed that. Power management configurations, settings to lock the machine on suspend, firewall settings, and wireless connection settings. None of these are earth-shattering settings to get some fundamental piece working.
Now, at the beginning I made it sound like everything completely worked when I installed. That may seem misleading considering all the files I describe above, but they’re largely customizations as opposed to required steps. The basic tweaks are the only real requirements.
As far as observations go, the ThinkPad X201 gets great battery life. It’s about the same as running Windows 7 and definitely better than any Linux distribution I’ve tried to run on it. WiFi is also better in OpenBSD than it was on Linux. Signal reception is more consistent, whereas it seemed to fluctuate a considerable amount on Linux.
All in all, this is a pleasant surprise and I’ll be running OpenBSD on this laptop for the foreseeable future.