Status

I miss blogging. I’ve done an absolutely terrible job this year of balancing programming and writing. Maybe I’ll shoot for one blog post per week in 2018.

Status

Did anyone at Apple even try to use Control Center on the iPhone X while carrying a coffee, or a briefcase, or, I don’t know, a baby? I think I’m gonna have to yell about this until they change it.

Status

How do I add multiple volume changes to an AVMutableAudioMixInputParameters instance? Using setVolume:atTime twice in a row is causing the first instruction to be ignored.
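
Here's roughly what I'm doing (a trimmed-down sketch; the track and times are made up):

```swift
import AVFoundation

// Two stepped volume changes on the same input parameters object.
// In my app, the second call seems to make the first one get ignored.
func makeAudioMix(for audioTrack: AVAssetTrack) -> AVAudioMix {
    let params = AVMutableAudioMixInputParameters(track: audioTrack)
    params.setVolume(0.5, at: kCMTimeZero)
    params.setVolume(1.0, at: CMTime(seconds: 3, preferredTimescale: 600))

    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [params]
    return audioMix
}
```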

Control Center on the iPhone X

I ended up enabling AssistiveTouch, and under Custom Actions set Single Tap to Control Center. I placed the virtual button at the very bottom of the screen and set it so that a triple-click of the side button activates/deactivates AssistiveTouch. It’s not an ideal solution, but I need one-handed access to Control Center and this does the trick.

Staying Focused with an App Mission Statement

One important part of marketing an app is developing an elevator pitch (for more info on that, see Aleen’s great post at App Launch Map). An elevator pitch helps you tell others what your app does and why it should exist without going into too much detail about its entire feature set.

A mission statement (also called a vision statement or statement of purpose) is slightly different. It can also be used for external marketing; however, its primary purpose is to provide internal guidance. A company’s mission statement should ideally be consulted before making any product decisions, codifying any policies, or beginning any strategic initiatives. It describes the company’s “core” and helps prevent a loss of focus.

I think we can all think of at least one app or tech company that seems to have lost its focus lately (👋🏻 Dropbox). iTunes used to be about music. VSCO used to be a great photo editor. Everything Facebook owns now has Stories inside of Stories. Indeed, feature creep and a general misunderstanding of user wants/needs has ruined many a good app/service.

That’s why I decided to come up with a mission statement for my app. As I’ve been working to improve Snapthread (yes, I decided to make the “T” lowercase; it’ll be reflected in the next update), I’ve found myself getting lots of ideas for new features. I want to make sure I don’t stray away from the app’s true mission.

So here goes: Snapthread’s mission is to provide the fastest, most intuitive way for people to merge Live Photos and videos for the purpose of compiling and sharing their memories.

I like that this statement has a human component. If I’m going to be returning to this over and over, I want to be reminded that my primary goal is to improve people’s lives (if only in a small way). It also brings accessibility to mind. From this, you can see that my goals are to be fast, intuitive, and to focus on video merging.

How is this useful in practice? Like this: if ever I get an idea for a feature that seems cool, but would greatly increase video export times, I’ll toss the idea because my goal is to be fast. If I ever find myself adding a lot of complexity to the interface, I’ll have to take a step back and ask myself, “Does this slow people down? Does it make the app less intuitive? How does this affect the user’s workflow?” Another example: I’m planning to make Snapthread a universal app. I’ll probably do a lot with drag and drop, because dragging and dropping things on an iPad is both fast and intuitive.

My mission statement also reminds me of how my app is different from others, lest I be tempted to copy them. For instance, Clips also lets users merge videos. However, it doesn’t support Live Photos and is focused more on all of the fun effects that you can add to your movies. It also allows project saving, which adds a data persistence layer and a lot of added complexity. Snapthread doesn’t save anything, because it’s meant for quick creations without a lot of “tinkering.”

An app mission statement doesn’t have to be super formal. It doesn’t even have to be a statement…it could just be a few bullet points (I’d say no more than five). For me, it’s just one more thing to help me focus, especially when I’m trying to make to-do lists and wondering which feature I should tackle next!

Link

Interview: App Camp Fireside Chats

App Camp for Girls is interviewing one member of the Apple community each day for the duration of its current Indiegogo campaign. They were kind enough to ask me to participate, so you’ll find my entry above!

I admit I haven’t donated to the campaign yet, but plan to: I just need to decide on a rewards tier. As I’ve said in the past, I would have loved App Camp as a young girl. I hope you’ll donate too! I also want to encourage everyone to check out the other fireside chat interviews. They’ve been really fun and encouraging to read.

SnapThread 1.1 with Live Photo Support

I have a confession to make: I released SnapThread too early. I thought it was a good MVP (minimum viable product), but I was wrong. It was a little too buggy, and didn’t have a real “killer” feature.

The good news is, SnapThread 1.1 is now available, and it’s what I should have waited to release in the first place. I’m hoping that with the addition of Live Photo support, SnapThread can now be people’s go-to app for quick, easy portrait video compilations.

New features:

  • Live Photo support! SnapThread strips the 3-second videos from your Live Photos and lets you stitch them together (a rough sketch of how that extraction works follows this list).
  • The app now presents the native iOS share sheet upon successful export.
  • New duration limits. Filter your photo library by videos under 10, 15, 30, or 60 seconds.
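
For the technically curious, pulling the video half out of a Live Photo boils down to something like this (a simplified sketch using PHAssetResource, not SnapThread’s exact code; the function name is made up):

```swift
import Photos

// Simplified sketch: copy a Live Photo's paired video resource
// out to a temporary file so it can be merged like any other clip.
func extractVideo(from livePhoto: PHAsset, completion: @escaping (URL?) -> Void) {
    let resources = PHAssetResource.assetResources(for: livePhoto)
    guard let video = resources.first(where: { $0.type == .pairedVideo }) else {
        completion(nil)
        return
    }
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString + ".mov")
    PHAssetResourceManager.default().writeData(for: video, toFile: url,
                                               options: nil) { error in
        completion(error == nil ? url : nil)
    }
}
```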

Improvements/Bug fixes:

  • Faster exporting, plus a fix for a bug where exports would fail when merging more than 16 videos at once.
  • Better progress reporting. Downloading lots of videos from iCloud can take a while, and the app now more accurately reflects the download’s current progress.
  • Smart aspect ratio: since Live Photos are 3:4 and regular portrait videos are 9:16, SnapThread chooses the final video’s aspect ratio based on what you have more of. So if you have 45 Live Photos and 1 video you want to merge, the final video will be 3:4. Likewise, if you have lots of 9:16 videos and only a few Live Photos, the final video will be 9:16. (You should see the tangled if-else statements that determine the video clips’ scaling and translation values…it’s terrifying. A rough sketch of the ratio-picking part is below.)
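
Boiled way down, the ratio-picking part of that logic looks something like this (a simplified sketch with made-up names, minus all the scaling and translation math):

```swift
import CoreGraphics

// Simplified sketch of the "smart aspect ratio" decision.
enum ClipKind {
    case livePhoto // 3:4
    case video     // 9:16
}

func renderSize(for clips: [ClipKind], height: CGFloat = 1280) -> CGSize {
    let livePhotos = clips.filter { $0 == .livePhoto }.count
    let videos = clips.count - livePhotos

    // Whichever media type dominates decides the final aspect ratio.
    return livePhotos >= videos
        ? CGSize(width: height * 3.0 / 4.0, height: height)   // 3:4
        : CGSize(width: height * 9.0 / 16.0, height: height)  // 9:16
}
```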

Since “the proof of the pudding is in the eating,” here’s a video of Charlie’s day at the pumpkin patch that I stitched together from Live Photos (and one video):

Future plans/ideas for the app:

  • Photo library album picker (so you can find clips more easily)
  • Scaling improvements and fixes (I’m sure there are some edge cases I missed!)
  • Ability to mute individual clips?
  • Ability to add a very simple title to beginning of video?
  • New localizations
  • Accessibility improvements
  • iPad version??

SnapThread Now Available

SnapThread icon

My new app, SnapThread, is now available on the App Store! SnapThread is a simple, no-frills utility for merging portrait videos from apps like Snapchat and Instagram Stories.

I am hoping to add support for Live Photos in the next version. In the future, I may also add the ability to include a title for a few seconds at the beginning of the video, or add a single background music track. However, I don’t want to complicate the app too much, so those features aren’t guaranteed to make the cut.

Let me know if you run into any trouble (errors and such) using the app, and if you like it, please consider leaving a rating or review. Thanks!

Please, Don’t Write About My App

Please, don’t write about my app. It’s not that good. In fact, it probably crashes sometimes. Also, I don’t really know what I’m doing.

Please, don’t tell your friends about my app. They probably won’t like it. I mean, the art assets aren’t good enough. It’s too simple. And it only really appeals to a tiny niche anyway.

Thus goes my inner monologue every time I prepare to ship an app. It’s not because I’m humble: trust me, I’m not. It’s just…fear of failure, I guess?

For indie developers, marketing is especially important. You gotta get the word out about your stuff. You gotta build your audience, refine your #brand, hack all dat growth, and so on and so forth. It feels gross. It isn’t, though—at least not for the most part. Still, I struggle with it, as I’m sure many of you do as well.

Look, my app isn’t special. It’s not Apple Design Award material. Does that mean it shouldn’t exist? No, it doesn’t mean that at all. I created something that’s useful to me, and now I’m going to share it with others. If they don’t find it useful, that’s fine. But if I want to give it its best chance at success, it’s still my job to tell its story.

But if you do…

Look, if you’re going to write about my app, say this: it’s a simple utility for merging short, portrait videos. It’s called SnapThread. It’s currently waiting for review.

It’s for parents who have a bajillion Snapchat videos of their kids with Marilyn Monroe hair or with a hot dog dancing on their head or what-have-you and all they want to do is create a sweet supercut of them all. No fancy filters or overlays or stickers or cropping. No dumb letter-boxing that forces it into landscape. Just stitch ’em all together and get on with your day.

It’s for travelers who have 30 Instagram Stories videos from their trip to Disney World spread over several days, and want to mash them all into one movie.

It was created by a mom who wanted to visualize how her son has grown.

SnapThread does what it says on the tin: it lets you select portrait videos from your photo library that are 15 seconds or less, rearrange them to your liking, merge them together, and save them to your photo library. Sometimes it takes a long time. Sometimes the videos have to be downloaded from iCloud. Sometimes their rotation has to be fixed before the merge can finish.

This isn’t an app for your home screen. This is an app you throw in your “Photo/Video Editing” folder and use once in a while when you need it. It’s like “Clips,” but simpler.

SnapThread will be out soon. In the meantime, you can check out this page about it (the App Store link doesn’t work yet obviously).

Tell your friends! Or don’t, maybe. I don’t know.

Too Many AVPlayers?

Wow, I can’t believe it’s nearly September! For me that means I’m 1) popping allergy pills like a maniac because UGH RAGWEED, 2) getting really excited for the September Apple event, and 3) scrambling to finish up a random side-project app before iOS 11 hits the mainstream.

A couple nights ago I ran into a strange bug with my app, which uses AVFoundation to merge videos. Sometimes I would be able to export the final video with AVAssetExportSession and save it to my photo library, and sometimes it would randomly fail with the following error:

Error Domain=AVFoundationErrorDomain Code=-11839 "Cannot Decode" UserInfo={NSLocalizedFailureReason=The decoder required for this media is busy., NSLocalizedRecoverySuggestion=Stop any other actions that decode media and try again., NSLocalizedDescription=Cannot Decode}

In my app, every time a new video clip is added to the list of videos to be merged, I re-generate a preview of the final merged video. After merging the video, I set up an AVPlayer with the AVComposition, roughly like this (simplified, with illustrative names):
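
```swift
import AVFoundation
import UIKit

// Build a brand-new player around the merged composition every time
// the preview changes (class and property names are illustrative).
final class PreviewViewController: UIViewController {
    var player: AVPlayer?

    func setUpPlayer(with composition: AVComposition,
                     videoComposition: AVVideoComposition) {
        let playerItem = AVPlayerItem(asset: composition)
        playerItem.videoComposition = videoComposition
        player = AVPlayer(playerItem: playerItem)
    }
}
```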

I always made sure to set the AVPlayer to nil before re-generating the preview, so I couldn’t figure out why there would be any other “actions that decode media.” A trip to Stack Overflow revealed a possible platform limitation on the number of video “render pipelines” shared between apps on the device. It turns out that setting the AVPlayer to nil does not free up a playback pipeline and that it is actually the association of a playerItem with a player that creates the pipeline in the first place. Since developers don’t seem to have any control over when these resources are released, I knew I’d have to figure out another solution.

In the end, I decided to initialize the view controller’s AVPlayer right off the bat with its playerItem set to nil. Then I changed my setup function, roughly like this (again, simplified):
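
```swift
import AVFoundation
import UIKit

// The fix: keep one long-lived player, created once with a nil item,
// and swap compositions in with replaceCurrentItem(with:).
final class PreviewViewController: UIViewController {
    let player = AVPlayer(playerItem: nil)

    func setUpPlayer(with composition: AVComposition,
                     videoComposition: AVVideoComposition) {
        let playerItem = AVPlayerItem(asset: composition)
        playerItem.videoComposition = videoComposition
        player.replaceCurrentItem(with: playerItem) // reuses the existing pipeline
    }
}
```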

Replacing the player’s current item, instead of initializing a new player with a new playerItem each time (even though the old player had been set to nil), seems to prevent that weird “cannot decode” error (so far, at least). I’d like to know more about this error and why exactly it occurs, just out of nerdy curiosity.

Anyway, I hope this helps somebody out!

Data Persistence Dilemma

Sometimes, in order to solve a problem, I have to think through it out loud (or in this case, in writing).

Here’s the sitch: I’m making an app for knitters and crocheters. They need to be able to manage projects (e.g., “Baby blanket,” “Socks for mom,” etc.). In addition to a bunch of metadata about each project, users should be able to add photos and designate one or more images or PDF files as the project’s pattern. A PDF or image of the pattern isn’t required, but including one will allow users to enter a split view where they can view the pattern and operate a row counter at the same time.

A project shouldn’t necessarily “own” its pattern. In other words, a pattern can have multiple projects associated with it (say you want to make the same baby blanket for multiple babies), so as to avoid the needless duplication of the pattern file. A pattern can exist without a project and a project can exist without a pattern, but when linked, the project becomes the child of the pattern.

My user base includes people who may not always have an internet connection. Therefore, all data needs to be stored locally. However, those who do have an internet connection are going to want iCloud sync between devices.

I like Core Data. If I were to set this up in Core Data, without any consideration of iCloud syncing, I’d create Project and Pattern entities, store images and PDFs in the file system, and call it a day.
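
Sketched out, that plan would look something like this (entity and attribute names are just placeholders, not a finished schema):

```swift
import CoreData

// Rough sketch of the two entities described above. Pattern files and
// photos live in the file system; Core Data only stores references.
final class Pattern: NSManagedObject {
    @NSManaged var title: String
    @NSManaged var fileName: String?   // PDF/image stored on disk
    @NSManaged var projects: NSSet     // one pattern, many projects
}

final class Project: NSManagedObject {
    @NSManaged var title: String
    @NSManaged var rowCount: Int32
    @NSManaged var pattern: Pattern?   // optional parent pattern
}
```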

iCloud syncing is where things get murky for me. Core Data + iCloud is deprecated, and I don’t want to use it. Not only that, I don’t know what to do with the PDFs and images. Storing them as BLOBs in Core Data seems like a bad idea. I understand how to save them to the file system but don’t understand how to sync them via iCloud and also have a reference to them in Core Data. Do I use iCloud Document Storage for them? Do I zip them up somehow (NSFileWrapper??) and use UIDocument? How do I store a reference to them in Core Data (just the file name of the UIDocument, since the file URL is variable?). If users will be adding photos and PDFs at different times, do I use one UIDocument subclass for photos and one for PDFs or do I use a single document and update it with the added information? You can tell I obviously have no idea how this works, and a multitude of Google searches has yet to clear it up for me.

As for the rest of the information in Core Data, I’m thinking of trying to sync it with CloudKit using something like the open source version of Ensembles or Seam3.

I guess I’m not sure if I’m on the right track and would welcome any advice/feedback. I’d really like to stay away from non-Apple services (like Realm) for the time being. Comments are open!

Status

Indies: when you decide to run with a new app idea, what sort of prep do you do before you ever fire up Xcode? Prototype? Design sketches?

WWDC 2017: Designing for Everyone

If I had to guess at an unofficial theme for this year’s WWDC, it would be “Designing for Everyone.” In addition to being the name of an actual session (806: Design for Everyone), the phrase captures the main thrust of what many of the other sessions were about as well. More than one presenter mentioned that while accessibility wasn’t a primary focus when re-designing some of Apple’s first party apps for iOS 10, it became a priority for iOS 11. One presenter (I can’t remember who) admitted that her team still had plenty of work to do on the accessibility front.

The session 802: Essential Design Principles reminded me of the best book I read in grad school: Donald Norman’s The Design of Everyday Things. Apple Design Evangelism Manager Mike Stern outlined many of the same principles that Norman discussed in his book—things like natural mapping and affordances.

In Chapter 2 of The Design of Everyday Things, there’s a little box with questions that designers should ask about their design:

How easily can one:

  • determine the function of the device [or in our case, an app or feature]?
  • tell what actions are possible?
  • tell if the system is in desired state?
  • determine mapping from intention to physical movement?
  • determine mapping from system state to interpretation?
  • perform the action?
  • tell what state the system is in?

(Norman, D. A. (2013). The design of everyday things. New York: Basic Books, a member of the Perseus Books Group.)

The questions are based on seven stages that humans go through when taking an action. When people use our apps, they have a goal in mind that they want to accomplish. They should be able to figure out what actions are available, which ones will lead them to their goal, and how to execute said action(s). Ideally, they should be able to do this without any additional walkthroughs, on-boarding, or tooltips.

They should also be able to accurately predict and evaluate the outcome of their actions. This can be achieved using things like error/success messages, animations, button state changes, view transitions, etc.

An app’s design should clearly communicate what you can do with it. Can you toggle a switch? Press a button? Scroll up and down? All of those things are called “affordances”—things the app allows you to do. 3D Touch is tricky because one of the affordances of a flat piece of glass is NOT that you should be able to exert pressure downward “into” it. Rather, glass, as a material, affords swiping and tapping by its very smooth, flat nature.

AirPods are another example of hidden affordances. Can you tell by their design that you can tap on them to execute additional functions? No. That’s traditionally considered bad design.

State changes are also important. What state is your app in? Is it syncing, or loading something? If you are using a tabbed interface, is it clear which tab the user is currently viewing? These questions are integral to making sure your design is accessible for everyone.

I’m planning to spend the next few weeks updating Nebraska93’s design to be compatible with Dynamic Type. A few days ago I tweeted a video of my successfully re-designed table view cells, and I’m really surprised by the number of favorites and retweets it got. A few people remarked that they didn’t know Accessibility Inspector could adjust Dynamic Type settings on the fly.
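
If you haven’t tried it, the basic recipe is short. Here’s a minimal sketch (a generic cell, not Nebraska93’s actual code):

```swift
import UIKit

// Minimal Dynamic Type support in a table view cell: use the preferred
// text styles and opt in to automatic updates when the user's size changes.
final class InfoCell: UITableViewCell {
    override func awakeFromNib() {
        super.awakeFromNib()
        textLabel?.font = UIFont.preferredFont(forTextStyle: .body)
        textLabel?.adjustsFontForContentSizeCategory = true
        detailTextLabel?.font = UIFont.preferredFont(forTextStyle: .caption1)
        detailTextLabel?.adjustsFontForContentSizeCategory = true
    }
}
```

Pair that with self-sizing rows (UITableViewAutomaticDimension) and the cells will grow along with the text.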

My advice to fellow devs is twofold. First, open up Accessibility Inspector and see how your app looks to people who have low vision. Off-hand, I can think of three people that I know in real life who use the larger text size settings, and I suspect it might be way more common than you might think. Second, read The Design of Everyday Things. It’ll make you really mad at a lot of doors and household appliances, but it’s worth it!