New Year’s resolutions are mostly bullshit. Especially those as vague as “learn to code”. But if you know nothing about programming it’s hard to come up with something better (i.e. more specific). This post is designed to help you figure out exactly what your “learn to code” goal should be for 2013, so that you have a shot at success.
Fear is the mother of all motivators. As bad as the public education system in America might be, it would be a lot worse if there were no consequences for kids who skipped class or failed to complete assignments. In fact, many of the best performing schools like KIPP are actually much stricter than your average school.
The reason it’s so hard for adults to learn new things is that there is no authority figure punishing them if they fail. Although I’d like to believe in the romantic notion of learning as some pure pursuit of truth, the reality is that our brain is not wired to do uncomfortable things with no short-term reward. If we’re serious about accomplishing our goal, we have to create a context where we have no choice but to learn.
In school, the teacher is the authority figure. In the marketplace, however, we are held accountable by our customers. If you take money from someone in exchange for promising to do a job, you’d better do the damn job.
In my experience, the best way to learn to code is to find someone who needs a website built and offer to do it for really cheap, though not free. (In college I typically charged $100). It’s enough to take it seriously, but not so much that people will expect professional-grade work.
Here’s the thing—anyone can learn enough HTML and CSS to build a serviceable website in a week. The hard part is having the motivation to actually do it. By finding a customer, you force yourself to make and deliver on a promise.
New goal: Find a cheap freelance gig
There are a million types of programming tutorials out there, so it’s important to have a clear idea of exactly what you want to do. I’m pretty sure the two most popular goals right now are to be able to build iPhone apps and web-apps. Unless you’re dead-set on mobile and have a bit of coding experience under your belt, however, I’d advise against starting with iOS apps.
Learning Objective-C (the programming language used in iPhone apps) and dealing with the whole app-store ecosystem is a big pain in the ass. The tools for building web applications, though, are better in every single way. My suggestion is to cut your teeth on web-apps (you can even focus on mobile web-apps!), then move on to native Android or iOS apps later. Trust me, this will save you a lot of pain.
New goal: Focus on web-applications for now
Shameless plug! I’m working on a step-by-step tutorial that teaches beginners how to code their first web-application. Had to at least mention it :)
New goal: Start with HTML and CSS
Of course, these are only my opinions. But I’m pretty sure they’re right. Too many people get started and give up because they feel like there’s no light at the end of the tunnel, nothing interesting on the horizon. By hacking fear to motivate you to actually do the hard work, and by choosing a good starting point, you’ll have a much greater chance at success than you would otherwise.
I hope this helped! Let me know in the comments if you have anything to add, and if you thought this was a helpful article I’d be honored if you tweeted about it. Thanks so much!
Caching is important. It’s one of the best tools we have to make websites faster, so any aspiring coder needs to understand it. But most of the articles you’ll find when you search for “caching” cover specific tools, or assume you’re already an experienced programmer. They don’t explain the basics of how a cache (pronounced “cash”) works or why it’s useful. My goal with this article is to fill that gap and provide a gentle introduction for smart beginners.
Imagine that you and I are sitting at a table with only a pen and paper (no calculators) and someone walks up and asks us “What is 930 ÷ 24?”. We’re helpful folks, so we work it out on paper and with a bit of effort figure out that the answer is 38.75.
A few minutes later, a second person walks up and asks the same question: “What is 930 ÷ 24?”. You tell him the answer, then notice that I’m slowly trying to work out the problem all over again on paper.
You didn’t know it yet, but you just beat me to the punch and saved effort by using a cache. Caching saves the result of a calculation so you can conserve energy and respond faster to requests for data by not performing the same calculation multiple times.
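The pen-and-paper story maps directly to code. Here’s a minimal sketch in Ruby, with a Hash standing in for the cache (the function names are invented for illustration):

```ruby
# A cache is just a place to store answers you've already worked out.
CACHE = {}

def slow_divide(a, b)
  # Pretend this takes minutes of pen-and-paper effort.
  a / b.to_f
end

def cached_divide(a, b)
  # Look in the cache first; only do the work on a cache miss.
  CACHE[[a, b]] ||= slow_divide(a, b)
end

cached_divide(930, 24)  # first call does the work: 38.75
cached_divide(930, 24)  # second call just reads the stored answer
```

The second call never touches `slow_divide` at all, which is the whole point.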
The answer to 930 ÷ 24 never changes, so it’s possible to store the answer in a cache forever and never have to run the calculation again. But what if someone asked you for the average temperature of San Francisco over the past 7 days? Since the answer changes daily, your cached data would quickly become stale (caching jargon for “out of date”).
The solution is to store some meta-data about whether or not the cache is fresh. Typically this is done either by setting an expiration date (“good until 1/30/13”) or by validation, which is a bit more advanced, but basically consists of asking the server whether the data in your cache is still fresh (more details on this below).
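In code, that meta-data can be as simple as storing an expiration time next to each value. A sketch in Ruby (the names and the stand-in temperature value are invented; a real cache library handles this bookkeeping for you):

```ruby
# Each cache entry remembers its value and when it stops being fresh.
CacheEntry = Struct.new(:value, :expires_at) do
  def fresh?
    Time.now < expires_at
  end
end

TEMPERATURE_CACHE = {}

def fetch_average_temperature
  57.0  # stand-in for a slow calculation or API call
end

def average_temperature
  entry = TEMPERATURE_CACHE[:sf]
  return entry.value if entry && entry.fresh?  # still good: use the cache

  value = fetch_average_temperature            # stale or missing: redo the work
  TEMPERATURE_CACHE[:sf] = CacheEntry.new(value, Time.now + 24 * 60 * 60)
  value
end
```

Here each answer is “good for” 24 hours; after that, `fresh?` returns false and the next request redoes the work.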
In the example above, you stored your cached data on the paper where you wrote down the answer. In the context of a web application, there are two main places where you can store cached data: in the user’s browser, and on your own server.
Every HTTP response (the raw data your server sends to the user’s browser over the internet) contains two main parts: the headers and the body. The headers contain metadata about the file your server is sending, and the body is the file itself. Here is an example of a typical HTTP response a server might send for a CSS file:
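A representative sketch (the header values are invented for illustration):

```
HTTP/1.1 200 OK
Content-Type: text/css
Content-Length: 76
Cache-Control: max-age=3600
Last-Modified: Tue, 01 Jan 2013 04:00:25 GMT

body {
  background-color: #fff;
  font-family: Helvetica, sans-serif;
}
```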
There are several ways we can use HTTP headers to control how browsers cache files locally. Here are the four main ones:
Expires: Tue, 01 Jan 2013 04:00:25 GMT
Sometimes you know in advance how long the file will stay unchanged. In this case, you can put a line in the HTTP response header that will tell the browser when the local cached version should expire.
Cache-Control: max-age=120
Instead of describing the expiration date as an absolute moment in time in the future, you could also use relative time (in seconds). For example, above is the HTTP header you would include if you wanted to cache a file for 2 minutes.
Last-Modified: Tue, 01 Jan 2013 04:00:25 GMT
Another way to cache is to keep track of when files are modified. The browser remembers the Last-Modified date it received with a file, and the next time it needs that file it asks the server, in effect, “has this changed since then?” If the file on the server hasn’t been modified since it was last downloaded, the server says so and the browser keeps its local copy instead of re-downloading. All you have to do to enable this is send along information about when your files were modified in the HTTP header.
The ETag method is very similar to Last-Modified, except it uses a unique ID for each version of the file rather than a date. If the ETag of the local file doesn’t match the version coming from the server, the browser will download a new copy.
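For the curious, here’s roughly what that validation decision looks like from the server’s side, sketched in Ruby (heavily simplified; real servers handle many more cases). The browser sends its saved date and e-tag back as If-Modified-Since and If-None-Match headers, and the server replies 304 Not Modified when the cached copy is still good:

```ruby
require "time"

# Decide between 304 ("use your cached copy") and 200 ("here's a fresh copy").
def response_status(request_headers, file_mtime, file_etag)
  if request_headers["If-None-Match"] == file_etag
    304  # e-tags match: tell the browser its copy is current
  elsif (since = request_headers["If-Modified-Since"]) &&
        Time.httpdate(since) >= file_mtime
    304  # file hasn't changed since the browser's last download
  else
    200  # something changed (or no cache info): send the full file
  end
end

mtime = Time.httpdate("Tue, 01 Jan 2013 04:00:25 GMT")
response_status({ "If-None-Match" => "v42" }, mtime, "v42")  # 304
response_status({}, mtime, "v42")                            # 200
```

The browser never has to know which scheme the server prefers; it just echoes back whatever cache metadata it was given.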
In order to gain full control over your implementation of HTTP caching, you’ll need to configure your server to send the proper headers. There are many different ways to do this depending on the type of web server or framework you’re using, so I won’t go into that in this blog post. That being said, most web servers come with some HTTP caching out of the box, so you are probably already using it to some extent.
You can look at a site’s HTTP headers using a tool called curl (short for “Client for URLs”). On most Mac or Linux machines, it comes pre-installed. Windows folk need to download it.
Once you’ve got curl, you can go to your terminal and run curl -I url, replacing “url” with the appropriate link to the resource whose HTTP headers you want to inspect. It will return just the HTTP headers, so you can look for all the caching methods I mentioned above.
Browser caching is great for static content like CSS files and images, but sometimes you need a more sophisticated caching mechanism. For example, most big sites like Facebook and Twitter use a system called Memcached that keeps a copy of information from their database (slower) in memory (faster). That way they can spread out some of the load from their database to the faster alternative. A diagram to explain this type of caching looks like this (not all back-end caches sit between the database and the web server, but many do; consider this a useful simplification):
Implementing a back-end cache is only necessary for dynamic web applications with substantial traffic. There are many different caching tools like Memcached, and an even greater number of ways to integrate them into your application. For the purposes of this blog post it would be overkill to dive into the details about how they all work. If you want to learn more, check out the “further reading” section below.
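To make this concrete without pulling in Memcached itself, here’s the gist of the pattern in Ruby, with one plain Hash playing the part of the fast in-memory cache and another playing the slow database (all names invented for illustration):

```ruby
FAKE_DATABASE = { 42 => { name: "Ada" } }  # pretend each lookup here is slow
MEMORY_CACHE  = {}                         # pretend this is Memcached: fast

def find_user(id)
  MEMORY_CACHE.fetch(id) do     # fast path: answer already in memory
    user = FAKE_DATABASE[id]    # slow path: ask the database...
    MEMORY_CACHE[id] = user     # ...and keep a copy for next time
  end
end

find_user(42)  # hits the database, caches the result
find_user(42)  # served straight from memory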
I hope you enjoyed this brief tour of caching! Here is a summary of some of the main points:
If you’re interested in learning more about caching, here are a few of the best resources I found while researching this blog post:
Thanks to Kyle Mulka, Nathan Cahill, and James Kruth for reviewing drafts of this.
PS — If you enjoyed this post and are learning to code (or know someone who is), you might also enjoy a book I’m writing called Enough To Be Dangerous. It’s a step-by-step guide to coding your first web application. Check it out!
People in the startup community like to talk a lot about taking risks, but when entrepreneurs do, they often get lambasted. Taking risks isn’t an easy thing, by definition. And risks often don’t work out (also by definition). It’s easy to applaud risk taking when it works out, but when it doesn’t, shouldn’t we appreciate the risk taking anyway?
Steve Jobs is hailed as a visionary risk taker, and he was. As we all know, when he got back to Apple he slashed the product line to only four products. He took a risk making the iMac G3 so colorful. The iPod was a huge risk, as were the iPhone and iPad. Fortunately, all those risks worked out for Apple. But there are also some risks that Steve Jobs took with Apple that failed: see the Lisa and the Power Mac G4 Cube for examples.
My point is this: risks don’t always work out. It’s easy to applaud someone when a risk pays off, but when it doesn’t, they shouldn’t be treated like an idiot.
This happens more often than you’d think. Reed Hastings was once considered one of the best CEOs on the planet, and Netflix’s success was an amazing story. They took many risks - shouldering higher and higher content costs from Hollywood while still moving toward creating original programming. During their rise they even drove Blockbuster out of business. Reed Hastings was widely praised.
And since he was so visionary, he could see that streaming was the future of his business. DVD shipping was lucrative for the time being, but in the future streaming would overtake it. So he decided to split apart Netflix’s DVD and streaming businesses. The DVD business was named Qwikster.
People weren’t ready for such a sudden change. Netflix didn’t have a large enough streaming library to stand on its own, and the split broke up the queues of movies people had already built. To keep both services, customers faced a significant price increase.
There was outrage, and Hastings had to walk back the whole initiative, fire many of the people he’d hired for Qwikster, and make a public mea culpa. The stock took a nosedive. But here’s the thing: everything Hastings said about the future of at-home movie entertainment was true. Streaming was the future, and revenues from DVD shipping were a lagging indicator. Netflix did have to move to streaming if it wanted to win in the future.
The problem wasn’t with the vision; it was with his strategy, and the tactics he used to implement it. Someday, Netflix will have to implement this vision again, albeit in a different, more customer-friendly way.
At the end of the day, Reed Hastings had a vision for the future (one which is probably correct), and he executed on it. He took a risk, a big one. And it didn’t pan out. Yet he got lambasted in the media for his mistake.
Risks don’t always work out. That’s the point of them. And that’s why not everybody takes them. If we’re going to celebrate the risk taking that pays off, we should at least appreciate the risk taking that doesn’t.
We’re taking our own risk by creating an online book to teach people how to code. It’s called Enough to be Danger.us. Check it out here.
Jeff Atwood’s Please Don’t Learn to Code sparked off quite a debate. His argument is just what it sounds like - not everyone needs to know how to code. He says that coding isn’t an essential life skill, and that writing more code is not the goal - code is a means of solving problems. He likens “everyone should learn to code” to saying that everyone should know how to be a plumber.
Others argue that we should try to teach everyone to learn to code because coding is the future. Some even say it’s literacy for the 21st century.
As you can probably tell from what we’re working on, we tend to agree that everyone should learn to code, even just a little bit. It’s a skill whose importance is only increasing. As Marc Andreessen says, “software is eating the world.”
But it’s more than just a cute saying: you can see the trend playing out across the economy. More and more industries are being integrated with, and in some cases overturned by, software. You can see the trend already in progress in some industries: people no longer take your ticket in the parking lot, computers do. And when you check out at the grocery store, you’re probably interacting with an automated system, not a person.
You can also see the trend just beginning in other sectors: The automobile industry, the marketing industry, even political campaigns. So wouldn’t it make sense that the people in those industries understand code, and understand how to code themselves?
As I mentioned in another post, learning to program doesn’t mean being a programmer. It just means understanding code, and being able to use it when it helps you do your job better. There are tons of knowledge workers who could use the knowledge of coding. Administrative assistants, marketers, journalists, and teachers, for instance. They’re never going to be professional programmers and work at Google, and that’s fine. Not all writers seek to get published.
And with the new computing paradigm of mobile phones and tablets, a new generation of computer users is being trained to think of computers as strictly consumption devices. As we all know, computers can be so much more than that. With “software eating the world” and the newest computer users only ever exposed to the possibility of being consumers of software, we could see a future in which an even smaller percentage of the population produces software than does today.
It makes sense to try to teach everyone to learn to code, because it makes sense to try to give everyone the opportunity to be a producer of software, even in a small way. When we do, most won’t program professionally, but they might do it for themselves to solve a problem they have. And what could be wrong with that?
If you want to learn to program, check out Enough to be Danger.us.
You may have heard of TDD (test driven development), but if you’re new to programming you may be curious to learn what a software test is and why it’s useful. Most of the literature about testing is targeted at experienced programmers and dives into the details without explaining basic things like why it’s useful to test your code in the first place and how a test works. This post aims to fill that gap.
Writing tests for your application lets you automatically ensure it’s working properly. For instance, if you’re writing a calculator app in Ruby that lets users add and subtract numbers, you would write a test for it that looks something like this:
describe Calculator do
  it "should be able to add 1+1" do
    Calculator.add(1, 1).should == 2
  end

  it "should be able to subtract 2-1" do
    Calculator.subtract(2, 1).should == 1
  end
end
For now you don’t need to worry about the exact syntax of how this works, just notice that this code is basically a checklist to make sure that the calculator can properly add and subtract. If we tell the Calculator to add 1 and 1, the result should be equal to 2. If we tell the Calculator to subtract 2 and 1, the result should be equal to 1.
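For reference, here’s one Calculator implementation that would make that checklist pass. (The article doesn’t show the implementation, so this is just a plausible sketch.)

```ruby
# A minimal Calculator satisfying the checklist above.
class Calculator
  def self.add(a, b)
    a + b
  end

  def self.subtract(a, b)
    a - b
  end
end

Calculator.add(1, 1)       # 2
Calculator.subtract(2, 1)  # 1
```

If we broke `add` while editing the code, the first checklist item would fail and tell us exactly where to look.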
When you run the test, if something is broken it will tell you what you need to fix. Handy!
This should give you some intuition about how software tests work, but you’re probably wondering why this kind of thing is useful. It would be easier to just write the Calculator code and use it to see if it works as expected. We can skip all that tedious test-writing stuff, right?
For our contrived calculator example, it’s perfectly fine to leave the code untested. But imagine a Super-Duper Calculator with 1,000 functions. Every time you edit the source code there’s a possibility that something breaks, and it would take forever to manually check all 1,000 functions. So you start to fear making changes to the codebase. Features don’t get shipped. Customers are unhappy. Everyone is sad. :(
If you had good test coverage of your Super-Duper Calculator, you could run all 1,000 checks with a single command in mere seconds. You’d be able to write new features with confidence.
Tests sound pretty good now, don’t they? But there’s more! There are different types of tests.
The example I showed you was a unit test, but there are other types of tests you can use. The main three are unit, integration, and acceptance tests.
A unit test measures whether isolated chunks of code in your program (like the “add” function) work properly. It’s the lowest-level type of test.
Integration tests come in handy when you need to test higher-level functionality, like when multiple parts of the program need to work together, or access external resources over the internet or from a database.
Acceptance tests focus on the end result that the user sees. For example, an acceptance test would simulate a user filling out a signup form to make sure that the next page shows them their profile correctly. It treats the code more as a black box. Sometimes acceptance tests are referred to as “functional” tests.
For more details, check out this Stack Overflow answer.
Here are the main points you can take home:
If you enjoyed this, you might be interested to know we’re writing a book that will teach you how to build simple web-applications for yourself. It’s called Enough to be Dangerous, and you can check it out at enoughtobedanger.us.
It’s hard to get people to do anything on the internet. But somehow our landing page convinced nearly 3 out of 4 visitors to click our call to action button. A third of those people would go on to give us their email address, leading to an ultimate completion rate of ~25%.
These numbers beat all my previous projects, so I thought it might be cool to do a bit of a deep dive to share four of the main lessons I learned from the experience. To see it firsthand, head on over to enoughtobedanger.us!
Chart courtesy of segment.io
When people land on a web page, they immediately (and subconsciously) evaluate its design for credibility. Our design is inspired by the typography from old title pages of books. We wanted to strongly reinforce our positioning that this is a book, and that it’s a fun intellectual exercise for curious minds.
We devoted two out of the three days we spent building the preview site to words. Design grabs attention, but only strong words can hold it. For instance, we chose to name our main call to action “Preview the Book” because we wanted to reinforce the message that this is a book, and because it’s more specific and interesting than a plain “Learn More”. It also primes people to read something, which is powerful in and of itself.
Each step should only contain enough information to convince people to move forward. We were tempted to try and put more information on our homepage (the more that’s on there the more likely people are to click through, right?), but ultimately decided against it. It’s better to send a clear, simple message and tell people what to do if they want to learn more.
When you land on our homepage we don’t ask you to sign up for an account right away. You need to click a button, read a bunch of stuff, then at the bottom we finally ask for your email. We decided to bury the call to action based on Kevin Hale’s theory that user acquisition is like dating. You have to make a really solid first impression to get another date, and you can’t start off dinner on Thursday by asking for drinks on Friday. The person on the other end doesn’t know if it’s worth it yet!
These are just a few of the lessons we learned while making our preview site for our book. Of course, none of it may apply to what you’re working on, so take it with a grain of salt! I’d love to hear from anyone who has comments on what they thought worked well or didn’t work! My email is email@example.com. If I get a few responses we can start a branch about it or something!
I’ve been on the path to starting a startup for a few years now. I’m far from an expert, but I do know a bit about how the very early stages should go. I know what it looks like when you have a good founding team, and when you don’t. I know how easily you can, over time, go from working on something you care about to working on something you don’t. And I know what it’s like to be working on a product for months and never ship anything.
I haven’t gotten to raising money, or to hitting product-market fit, or to scaling. I can’t speak to the problems of those stages. But I can speak to the problems of the very early stages - I know those very well.
Simply put, the early stages are hard (every stage is hard in its own way). If you haven’t been through it before, figuring out what you’re doing is a non-trivial problem. There are so many minefields you aren’t even aware of that can blow up in your face before you ever really get started. And getting past the early stages is what separates the “wantrapreneurs” from the entrepreneurs. Most people with startup dreams have to go through this stage.
The number one mistake people make: it’s easy to dream up an idea and talk about it, but hard to actually make a product, make it good, and get it into people’s hands.
And that’s where the rubber meets the road. All the strategizing you’ve been doing beforehand is probably a waste of time. If you have a rough vision of where you’re going then that’s enough to start. When you’ve been “strategizing” mostly you’ve been procrastinating. Except the procrastination feels productive, because it feels like you’re working on your startup. But you aren’t. You’re day-dreaming.
After you actually make a product and get it into people’s hands, then the hard part starts. You’ll find that mostly people won’t care. Your friends will listen to your pitch, and they’ll try it out because they don’t want to hurt your feelings. They might even use it for a few days. But everyone’s busy, and people can’t do you a favor every day by using your product. If it doesn’t provide them value, eventually they’ll forget about it and leave.
So then you have to regroup, figure out something else to try, and actually make it. Again: don’t spend too much time “strategizing”. One day is more than enough - building is what matters. After you’re actually making something that people want, then it will be worth it to strategize. (Funny thing is, by the time you get to that point, all the strategizing you were doing before will be irrelevant.)
How do you know that you’re making something people want? To paraphrase the Supreme Court’s opinion on pornography: you’ll know it when you see it. If you’re not making something people want, when you pitch people they say “Yeah, that could be kinda cool.” If you are making something they want, they say, “Awesome! I want that now!” It’s the difference between you having to convince people to try your product and people coming to you and asking for it.
Mind you, this is all during the early stage. Reaching product-market fit, raising money, and scaling each have their own set of problems (or so I hear).
Now I’m at the point where I’m working on something that people want; my cofounder Nathan and I just have to deliver. Once we do, maybe I’ll get a chance to write about the next stage.
Want to see what we’re making? Check out Enough to be Danger.us.
You can learn to program without becoming a programmer. I repeat: You can learn to program without becoming a programmer. Why is this an important distinction? Because too many people approach programming thinking they have to become a programmer like their technical friends.
Don’t worry, you don’t. Don’t be intimidated by your friend who has been programming since he was nine years old. You’re probably never going to be as good as him, and that’s OK, because you don’t have to be. Even if you learn to program a little bit, it will greatly improve your life.
Programming is a skill that can help you with a vast array of tasks. If you have an office job I’m sure there are many instances in which being able to program would be a great asset. It would help you do your job better, or allow you to complete your tasks faster. Do you find yourself in front of an Excel spreadsheet? Man, it would be pretty helpful to be able to program. Do you have to do data collection? You know, it would sure be awesome if you could automate that. Anytime you find yourself doing the same task over and over again, that’s an opportunity for programming to improve your life.
And if you’re a non-technical person working in the technology industry, then you should definitely be able to program, even just a little bit. You don’t have to be as good as the engineers, but you should know a bit about what they do, how web applications work, and how it relates to your job. In most cases the product you’re building is a piece of technology, so why wouldn’t you want to understand it better? In a restaurant, even though the maître d’ doesn’t touch the food, they still know the menu.
Or what if you’re an entrepreneur, and you have a great idea that you think can change the world, but you can’t build it? Then you’re in a bit of a conundrum. You can try to find someone to build it for you, but if you ask around you’ll quickly find out that’s a losing proposition. Your best bet is to build it yourself.
In these situations, you wouldn’t consider yourself an actual programmer, but your ability to program will help you do your job better, or make your life easier, or bring your idea into reality. And developing the skills to do that isn’t nearly as hard as you think.
If you’re interested in learning how, check out Enough to be Danger.Us.
You might think of programming as this mysterious black box. Thoughts come into a programmer’s head, they write some gibberish, and then poof! A working application comes out. But it doesn’t work quite like that. Programming isn’t some mysterious dark art. It’s a way to manipulate computer systems to do what you want. Even if you haven’t written a line of code, you’ve probably already programmed.
How is this? You may not have programmed computer systems, but you’ve programmed non-computer systems. When you put together an IKEA dresser, you’ve programmed a non-computer system. Actually, your brain is a non-computer system, and you program that all the time without even knowing it.
You don’t think of them as systems because they don’t take a lot of math or logic to program, and you don’t use code to program them. But they are systems. A system is a set of interdependent components that form an integrated whole. They take inputs and give outputs. IKEA dressers have a lot of components that all depend on one another, and you have to put them together a certain way. When you move the dresser, it’s supposed to slide on the floor. If you don’t put the pieces together the right way, it won’t slide. In other words, the program will be broken.
You’ve already programmed non-computer systems, but how does that translate to actual programming? Instead of putting physical components together, you put components of code together. When you talk to a computer, you have to use a language it understands. And when you use that language, you have to be very exact and literal. You can’t be subtle, you have to explain every bit, step-by-step. And if you’re not clear, if you’re not exact, the program won’t work.
Let’s use the example of programming your own brain. Say you’re in a meeting, and you have an idea you want to communicate to your co-worker. You have to listen to him, and when it’s your turn to speak, communicate your idea. You know how to do that without even thinking about it.
But what if you had to tell a computer how to do that? You’d have to be very literal, and break down every step. Assuming the computer spoke English (instead of Ruby, or Java, or SQL), how would you tell it how to communicate the idea? It might go something like this:

1. Decide on your opinion ahead of time.
2. Wait until someone asks, “What do you think?”
3. When they do, say your opinion out loud.
Great! Now we have a simple program for your brain to run. If it were written in Ruby (using Sinatra), it might look something like this:
myopinion = "I think we should go with Sinatra over Rails. It's still powerful and it's easier for beginners to pick up."

get "/what-do-you-think" do
  say myopinion
end
So what does this code say? When you get a request asking for information (a GET request) at the particular address "/what-do-you-think", do something: say myopinion. What myopinion is, is declared above. So, when your brain gets this particular request for information, it knows what to do.
Unknowingly, you’ve already programmed a non-computer system (congrats!), so don’t be afraid of it. The next step is understanding how to do the same thing with computers.
If you want to learn how, check out Enough to be Danger.us. It’s an online book that’s a step-by-step guide to coding a simple web application.
When cars first came out they were called “horseless carriages”. We refer to Twitter and Facebook as “social media”, and those little computers that live in our pockets are called “smart phones”.
They’re all manifestations of what Chip & Dan Heath label the “Anchor & Twist” strategy, in which new products introduce themselves using the terminology of their forebears. The goal is to help people understand how to use unfamiliar objects - to slip new products into old routines.
It works incredibly well, as any marketer will affirm. But there is one little hitch: cars are not really carriages at all, Facebook is nothing like the New York Times, and the iPhone differs so completely from those Motorola RAZRs we used to sport that the word “phone” borders on insulting.
There comes a time in the lifecycle of any product category when it must grow up and become its own man, so to speak. That time is coming for a category I care deeply about - “electronic books”.
You see, I’ve been obsessing about the future of books since I was a sophomore in college. It always seemed ridiculous to me that with all the advantages computers and the internet give us, the best we could do was to make glorified word documents and sell them for $9 to be read on devices intended to mimic paper. I dreamed of the day when a class of books could behave more like apps.
In fact, my first startup idea was to make a platform that would enable authors to create such books. Unfortunately, authors didn’t want it. So I realized that some people with a technology background would have to start creating books differently. The creators would lead, and platforms would follow.
Fast forward 5 years and here I am, writing a book with my best friend that is going to help people learn to code web-apps. Is it going to look anything like a book you’ve ever seen before? Hell no.
I can’t say much (yet) about the sorts of fun we’re going to have dancing on the line between book and app, but I can tell you a bit about the basics. We’re publishing this book ourselves on the internet. It’s built on a modified version of Ruby on Rails. Users will have accounts, and access to updates as we push them (which will be quite often). We’re going to start with a small group of readers and iterate from there using data. There will be a core flow of content with interactive examples, quizzes, videos, etc.
If you’re a web-developer, this all sounds totally normal and mundane. Publishers, on the other hand, will be mortified and baffled. Me? I’m just excited to see what happens and share what we learn as we’re making it.
Thanks for following along!
Curious? Check out what we’re making here!
Hi! We’re Sandbox Labs, a small company in San Francisco founded by Bryce Colquitt and Nathan Bashaw. We build cool stuff for the web.
Our first project is called Enough to be Danger.us. It’s an online book that’s a step-by-step guide to coding a simple web application. It’s for people who don’t want to be full time programmers: they want to know enough to get their idea built. We’ve taught ourselves to code, we’ve taught others, and we’ve discovered the best way to go from no knowledge to being able to program a working web-app. If that sounds like something you’re into, this book is for you.
This is the blog for the project. We’ll write about learning to code, but we’ll also write about startups, technology, entrepreneurship, and learning in general. Expect posts from us here every day, Monday through Friday.
Check out Enough to be Danger.us here, and sign up for updates on the book. Along with updates, we’ll also share a weekly digest of our best blog posts. We’re looking forward to hearing from you!