Another new year, another Codemash in the books. Thanks to my employer, Applied Information Sciences, this was the fourth Codemash I’ve been able to attend.
Codemash is a fantastic family-friendly developer conference that brings in speakers and attendees from all over the world, with a wonderfully diverse range of topics and expertise. If there’s a hot new language or technology, it’ll be discussed here, in the sessions or in the hallways. Despite being held in Sandusky, OH in the middle of winter, it’s a blast for the whole family since it’s hosted at the Kalahari indoor waterpark resort.
The pre-compilers are where you’ll actually learn the most – these are half or whole day sessions preceding the main conference, allowing you to deep-dive into a specific subject, and get hands-on experience with expert guidance. Bring a laptop with enough power to play – you’ll be spinning up new VMs, IDEs, and hardware. Also, bring plenty of hand sanitizer, Emergen-C, and antacids – because the Crud seems to spread very fast at these winter indoor conferences.
There are plenty of opportunities to expand your professional network – from the experts teaching the sessions, to fellow session attendees, to your dining table companions, to board-game players. I met a lot of great people doing interesting things from all over the Midwest and even my neck of the woods – I even met local developers who had preceded me at my local office! Some were even familiar with AIS and their reputation for doing great work in the .NET/Azure space.
Here I’ll break down the most memorable sessions that I attended, and the major takeaways from each.
Build a Natural Language Slack Bot for your Dev Team with Michael Perry
This one was a lot of fun: in this session, we linked together several different technologies to make a Slack bot that could interpret commands and then perform DevOps actions in VSTS.
- Create a Slack App (with bots) that accepts & passes messages to the…
- Azure Bot Framework, which provides a .NET framework & scaffolding to interpret messages (from Slack and other platforms) by integrating with…
- LUIS, which parses, interprets, and tokenizes natural-language text into a structure that can be coded against, enabling DevOps tasks to be executed via the…
- VSTS API, where you can list or execute builds, create deployments, manage the repo, etc.
- The Azure Bot Framework provides simplified connectivity to Slack, Alexa, Cortana, Skype, and other platforms. Simply check the appropriate box and provide the other platform with the related URL, and the framework transforms messages into a common data model for you.
- LUIS is Microsoft’s counterpart to Amazon’s natural-language offerings (Lex/Alexa). LUIS is more generalized, whereas Alexa seems to be more fine-tuned for voice commands.
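The plumbing on the bot side can be pictured as a simple intent router. Here's a minimal sketch in Python (rather than the session's .NET code) – the intent names, entity names, and the account/project values are my own illustrative assumptions, not the session's actual skill:

```python
# Hypothetical sketch: route a LUIS-style result to a VSTS REST call.
# Intent names ("QueueBuild"), entity names, and the account/project
# values below are illustrative assumptions, not the session's code.

VSTS_ACCOUNT = "myaccount"   # assumed VSTS account name
VSTS_PROJECT = "myproject"   # assumed project name

def route_intent(luis_result):
    """Map a parsed LUIS result to a VSTS REST endpoint and payload."""
    intent = luis_result["topScoringIntent"]["intent"]
    entities = {e["type"]: e["entity"] for e in luis_result["entities"]}

    builds_url = (f"https://{VSTS_ACCOUNT}.visualstudio.com/"
                  f"{VSTS_PROJECT}/_apis/build/builds?api-version=4.1")
    if intent == "QueueBuild":
        return ("POST", builds_url,
                {"definition": {"name": entities.get("buildDefinition")}})
    if intent == "ListBuilds":
        return ("GET", builds_url, None)
    return (None, None, None)   # unrecognized intent: ask the user to rephrase

# Example LUIS-style response for "queue the nightly build":
method, url, body = route_intent({
    "topScoringIntent": {"intent": "QueueBuild"},
    "entities": [{"type": "buildDefinition", "entity": "nightly"}],
})
```

The real bot would hand `method`/`url`/`body` to an HTTP client with a VSTS personal access token; the router above just isolates the intent-to-action mapping.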
Build Your First Voice-Enabled Experience with Alexa with Jeff Blankenburg
Whereas the previous session was Microsoft/Azure, this session was the Amazon/AWS counterpart. I learned how to craft an Alexa skill with an AWS Lambda backend.
- Alexa skill creation is done entirely with JSON configuration files – the skill metadata (skill.json) and the voice interactions (InteractionModel.json) are both defined this way.
- Alexa interprets, parses, and tokenizes speech in a command-oriented way:
- Invocation: open “State Facts”
- Utterance: “tell me about Ohio”
- Intent: “tell me about” maps to an Intent (function)
- Slot: “Ohio” maps to a variable (parameter)
- Development, testing, and deployment are easily done using the Alexa Skills Kit (ASK) CLI
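Putting those pieces together, a bare-bones Lambda handler for a skill like “State Facts” might look like the following Python sketch – the intent name, slot name, and sample data are my assumptions, not the session's actual code:

```python
# Hypothetical sketch of a raw AWS Lambda handler for a "State Facts"-style
# skill -- the intent name, slot name, and fact data are invented here.

FACTS = {"ohio": "Ohio is the birthplace of aviation."}  # made-up sample data

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":            # "open State Facts"
        return _speak("Which state would you like to hear about?")
    if request["type"] == "IntentRequest":
        intent = request["intent"]
        if intent["name"] == "StateFactIntent":       # "tell me about ..."
            state = intent["slots"]["State"]["value"] # the slot -> parameter
            fact = FACTS.get(state.lower(), f"I don't know much about {state}.")
            return _speak(fact)
    return _speak("Sorry, I didn't catch that.")

def _speak(text):
    """Wrap text in the JSON envelope Alexa expects back from Lambda."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

The ASK CLI handles packaging and deploying a handler like this alongside the skill's JSON configuration.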
3D Modeling for Makers and Game Developers! with Robert Palmer
I haven’t done any 3D modeling or animation since college, and it was in Lightwave – so while I was able to reapply the core concepts, transitioning to Blender was still difficult. There are so many different settings and commands to master in 3D modeling, and each application has its own steep learning curve. But with so many practical uses for 3D models, from 3D printing to game development, it’s an incredibly useful skill to have.
The biggest takeaways from this session were: A) nothing is more important than hands-on experience and playing around and B) having the right hardware is important – you will definitely want a full-size keyboard, large high-resolution monitor, and 3D mouse.
Building Holographic & VR Experiences with the Mixed Reality Toolkit for Unity with Nick Landry
A good introduction to VR – explained the differences between AR (Augmented Reality), MR (Mixed Reality), and VR (Virtual Reality). Microsoft’s focus is currently MR, with affordable headsets in the $250-$400 range.
Development is done in Unity, using the Microsoft Mixed Reality Toolkit library. It provides an API to add Windows UI flavoring to Unity 3D applications. Unity is another great cross-platform framework that can use C# code. I wonder if there’s any merit to trying to write a normal non-3D app in Unity?
Press Start: Game Development for the Uninitiated with Shawn Rakowski
I snuck into this session late, as the previous one I went to was a bust – and that’s unfortunate, because I enjoyed what bit of it I heard. The speaker gave a lot of practical tips for beginning game development. The most difficult aspect of game creation for most developers is the artistic side. Shawn encouraged neophyte game devs to “embrace programmer art” – i.e. don’t let imperfect art stop you. He gave a lot of good references to free resources for sound effects, music, and graphical assets.
Kubernetes Development on Azure made easy with Helm and Draft with Raghavan Srinivas
Containerization is the hottest thing in my neck of the woods, and many of my coworkers are using containers – and now Microsoft is jumping in head-first. The speaker demo’d Azure Container Instances and showed how to use Helm and Draft to manage Kubernetes. Azure supports Docker, DC/OS, and Kubernetes out of the box, plus Chef, Ansible, etc. via custom IaaS solutions – and it already has higher-level services like Functions, Service Fabric, and App Services. The speaker also provided source code for some hands-on labs.
Building an Artificial Pancreas with Timothy Mecklem
Out of all the sessions at Codemash, this was the most personal. My wife is an adult-onset Type 1 diabetic, and managing her blood-glucose levels is a difficult, tiring, never-ending chore. Tim Mecklem is in a similar situation, and is contributing to an open-source closed-loop insulin system.
Today’s insulin pump technologies only roughly approximate the pancreas’ function. Basal (constant) delivery rates are essentially fixed (set by an endocrinologist), and bolus (as-needed) deliveries are calculated from raw carbohydrate counts and administered over a short period of time. The danger for diabetics is that the basal/bolus calculations can be off due to a variety of factors, leading to dangerous hypoglycemic and hyperglycemic conditions. A normally functioning pancreas determines everything on the fly, releases insulin in a much more fine-grained manner, and keeps the BGL in the sweet spot. A closed-loop system uses software algorithms and Continuous Glucose Monitors (CGMs) to fine-tune the delivery of the proper amount of insulin, such that boluses are unnecessary and basal rates can be automatically adjusted on the fly.
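For context, the manual bolus arithmetic an open-loop pump user performs looks roughly like this. This is a simplified, illustrative sketch of the standard textbook formula only – real dosing also involves insulin-on-board, glucose trends, and a physician-set profile, and nothing here is medical advice:

```python
# Illustrative only -- a simplified version of the standard bolus formula.
# Real systems (and projects like OpenAPS) account for insulin-on-board,
# BG trends, and many other factors; the ratios below are made-up examples.

def bolus_units(carbs_g, bg_mgdl, target_mgdl=100,
                carb_ratio=10, correction_factor=50):
    """carb_ratio: grams covered per unit; correction_factor: mg/dL per unit."""
    meal_dose = carbs_g / carb_ratio                           # cover the carbs
    correction = max(0, bg_mgdl - target_mgdl) / correction_factor
    return round(meal_dose + correction, 1)

# 60 g of carbs at a BG of 180 mg/dL -> 6.0 + 1.6 = 7.6 units
print(bolus_units(60, 180))
```

Every input to that calculation (the carb count, the ratios, even the CGM reading) carries error, which is exactly why a closed loop that continuously corrects itself is so much safer than one-shot arithmetic.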
The closed-loop system uses Elixir and (the speaker’s own) Nerves framework to handle the system-level software, while the actual loop calculations are done with OpenAPS (which has iOS, Android, and Pi implementations). The Nightscout software is used to monitor the CGM and share that information online.
Unfortunately, the only thing that makes these self-made closed-loop systems possible is a security flaw – one that also poses a huge risk. The closed-loop software can only communicate with (specific, older) insulin pumps because their communications were unencrypted. Newer models have encrypted communications – ultimately necessary, but it makes third-party interaction impossible. The only hope now is that the medical device manufacturers will provide some sort of official secure API, but given that the FDA is involved, that’s still highly unlikely. So it’s a waiting game to see when the manufacturers will develop their own closed-loop systems and get FDA approval.
Lessons Learned from Making Resilient Apps with Azure Mobile App Services with Matthew Soucoup
This session covered practical designs for resilient (robust) applications that consume and create shared data. Today’s apps are creating and consuming a lot of data, and with that comes a lot of possible points of failure which all need to be accounted for in the design. Possible points of failure: intermittent (AKA mobile) data connectivity, limited data bandwidth, and data conflicts. All I can say is that I wish I had learned this 4 years ago 😀
- Keep client-side schema flat
- Structure data updates (to the server) as a list of operations.
- Optimize the schema for bandwidth and for the app’s (MVVM) data structures
- Data synchronization strategy: push changes to server, resolve any conflicts, push merged changes to server, pull new data
- Use Azure Mobile App Services (ZUMO) cross-platform SDKs for easy handling of data synchronization, push notifications, OAuth, etc.
Other lessons learned:
- No need to push data immediately (except shared data)
- Only do large pushes on Wi-Fi
- Only store data offline when it makes sense
- Incremental sync is your friend
- Store pending data operations via the sync context
- Use different exceptions/messages for offline vs. online pushes
- For conflicts, don’t ask the user if their data should win – figure it out in code
- Use push notifications to trigger a client data pull, keeping clients more in sync and lessening conflicts
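The “list of operations” idea above can be sketched in a few lines of Python – the class and method names here are my own invention, not the ZUMO SDK’s API:

```python
# A minimal sketch of the "list of operations" idea: instead of pushing
# raw records, queue each change and replay the queue when a connection
# (ideally Wi-Fi) is available. Names are invented, not the ZUMO SDK's.

import json
from collections import deque

class SyncContext:
    def __init__(self):
        self.pending = deque()          # offline queue of operations

    def insert(self, table, record):
        self.pending.append({"op": "insert", "table": table, "data": record})

    def update(self, table, record_id, changes):
        self.pending.append({"op": "update", "table": table,
                             "id": record_id, "data": changes})

    def push(self, send):
        """Replay pending operations through `send`; keep failures queued."""
        while self.pending:
            op = self.pending[0]
            if not send(op):            # offline or server error: retry later
                break
            self.pending.popleft()

ctx = SyncContext()
ctx.insert("todo", {"id": 1, "title": "pack for Codemash"})
ctx.update("todo", 1, {"title": "unpack from Codemash"})
sent = []
ctx.push(lambda op: sent.append(json.dumps(op)) or True)
```

Because failed operations stay queued in order, a later push (or a push-notification-triggered sync) can replay them without losing or reordering changes.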
Hey, You Got Your TDD in my SQL DB! with Jeff McKenzie
The problem with SQL (and datastores in general) for TDD is that the database is a separate component from your application server, which makes it difficult to unit test. You could write automated tests in, say, C#, but those are really systems-integration tests that hop between the server and the database – not pure unit tests. You could instead write stored procedures as unit tests, but the setup and boilerplate involved would far outweigh the test code, and their loose, informal structure would make it hard to introduce change.
That’s where the tSQLt MSSQL unit testing framework comes in.
tSQLt provides the test harness and utility sprocs for easy development of proper unit tests – atomic, isolated, and repeatable. The tests themselves are sprocs written in SQL, so no new skills are required. tSQLt also provides an easy way to make mock tables – an important aspect of TDD.
The tSQLt TDD process is familiar:
- Create a test data sproc that acts against the real tables
- Create table mocks using the tSQLt.FakeTable sproc
- this copies table structure w/o constraints, so you’ll have to explicitly declare those in the tests (which is a good thing)
- Insert test data into the mocks
- Create expected data as a table
- Execute the sproc under test
- Execute tSQLt.AssertEqualsTable
That’s it! And now we have our SQL code under unit tests.
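Real tSQLt tests are written in T-SQL, but the fake-table/assert-equals pattern itself can be mimicked in plain Python with sqlite3 to make the steps concrete – the table names and the “sproc under test” below are invented for illustration:

```python
# The tSQLt steps above, mimicked with sqlite3 so the pattern is clear:
# fake the table (no constraints), insert test data, run the code under
# test, and compare actual rows to expected rows. Names are invented;
# real tSQLt tests are T-SQL sprocs using tSQLt.FakeTable and
# tSQLt.AssertEqualsTable.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 1. The "real" table, with constraints.
cur.execute("CREATE TABLE Orders (Id INTEGER PRIMARY KEY, Total REAL NOT NULL)")

# 2. Fake it: same structure, constraints dropped (what tSQLt.FakeTable does).
cur.execute("ALTER TABLE Orders RENAME TO Orders_real")
cur.execute("CREATE TABLE Orders (Id INTEGER, Total REAL)")

# 3. Insert test data into the fake.
cur.executemany("INSERT INTO Orders VALUES (?, ?)", [(1, 10.0), (2, 20.0)])

# 4. Expected data as a table.
cur.execute("CREATE TABLE Expected (Total REAL)")
cur.execute("INSERT INTO Expected VALUES (30.0)")

# 5. The "sproc under test": here, just a summing query.
cur.execute("CREATE TABLE Actual AS SELECT SUM(Total) AS Total FROM Orders")

# 6. The AssertEqualsTable step: compare the two result sets.
expected = cur.execute("SELECT * FROM Expected").fetchall()
actual = cur.execute("SELECT * FROM Actual").fetchall()
assert actual == expected, f"expected {expected}, got {actual}"
```

In tSQLt each of those steps is a statement inside a test sproc, and the framework runs every test in a transaction that gets rolled back – which is what makes the tests atomic and repeatable.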
One caveat: it’s unlikely that these tests can run in parallel w/o conflict – so they’ll have to be executed sequentially. Take care when writing these tests that they don’t take too long to execute.
It’s always eye-opening to go to these conferences, because you begin to realize how much stuff you don’t know about simply because you don’t encounter it in your regular day-to-day routine. I learned a lot of new things: much of it practical, that I can use in my job today; but also some theoretical or philosophical, that I will have to think on for a while in order to incorporate. What I do know is that it’s a great time to be a developer, and that I can’t wait to get a chance to try out what I’ve learned.