Apple has traditionally had trouble with sales in India. While the company started manufacturing iPhones in the country to lower the price locally, it seems that it has a long road ahead of it, according to a report from Bloomberg: it’s sold fewer than a million devices in the first half of 2018.
Bloomberg reports that three Apple sales executives left the company as it restructures its operations there. It has only a 2 percent market share in India, and in 2017, it sold 3.2 million iPhones, according to a report by Counterpoint Research. But those sales appear to have slowed: the same report estimates that Apple has moved “fewer than a million devices” so far this year, and even with strong sales, it’ll have trouble catching up to last year’s numbers.
India is the world’s third largest market for smartphones, but its high tariffs — adding 15 to 20 percent to the price — have pushed consumers toward cheaper alternatives, like Samsung. Earlier this summer, Apple began to build the iPhone 6S and the iPhone SE in the country — a tactic that the company hopes will help reduce the price of its phones. But it’ll take a while before Apple’s operations there get up and running at full capacity, and in the meantime, Apple is lagging further behind its competitors.
India could be a huge opportunity for Apple, and CEO Tim Cook has indicated that it’s going to move aggressively into the country. The country has expanded its 4G network and has a growing middle class, which could mean that more people will be willing to adopt Apple’s products. Despite those low sales numbers, however, Cook said in May that the company’s revenue from India has grown, setting a record for the first half of 2018.
Motorola only announced its new Moto E5 Play and E5 Plus in April, but this week the company unveiled a new version of the E5 Play that’ll run Android Oreo (Go edition). It’ll include fewer pre-installed apps, as well as apps that are optimized to run on devices with less storage. It’s only available in the UK, starting on July 14th, at £69 for pre-pay or £89 for SIM-only. (That’s between $91 and $117.) That’s similar to the E5 Play’s current pricing in the US.
The E5 Play is one of the first Go devices; the first Go phone, the Alcatel 1X, launched in February, and Nokia and ZTE have also developed Go phones. All of these phones are designed to create a more pleasant budget phone experience. The E5 Play includes some more premium features, including a fingerprint sensor and a 5.3-inch display with an 18:9 aspect ratio.
During today’s World Cup match, the BBC released the first teaser for the upcoming 11th season of its science fiction show Doctor Who, which will feature actress Jodie Whittaker as the Thirteenth Doctor.
The trailer shows off only a brief glimpse of Whittaker’s Doctor, who appears right at the end of the teaser. She is the first woman to play the role of the Doctor, taking over for Peter Capaldi, who portrayed the Twelfth Doctor from 2014 to 2017. Fans of the show got their first glimpse of her during last year’s Christmas special, and her casting represents a new era for the show as Chris Chibnall (creator of the acclaimed crime drama Broadchurch) took over for Steven Moffat, the show’s long-time showrunner. The season is expected to premiere in October 2018.
Whittaker’s casting came after years of discussion over whether a woman should take over the role of the show’s most famous character: up until this point, all 12 Doctors had been played by men, and while the show’s creators had mulled the possibility, they didn’t feel that it was the right time. Moffat told the Radio Times in December that it was Chibnall’s call when he took over the show, and said that he thinks that “she’ll be brilliant as the Doctor.”
Doctor Who’s cast and crew will be showing up at San Diego Comic-Con later this week, where we’ll undoubtedly learn a bit more about the upcoming season.
In a series of tweets Sunday morning, Musk referred to Unsworth as a “pedo,” requested video of the cave rescue, retracted that request, then promised proof that his submarine could have, in fact, performed the rescue.
It’s a lot.
Never saw this British expat guy who lives in Thailand (sus) at any point when we were in the caves. Only people in sight were the Thai navy/army guys, who were great. Thai navy seals escorted us in — total opposite of wanting us to leave.
Water level was actually very low & still (not flowing) — you could literally have swum to Cave 5 with no gear, which is obv how the kids got in. If not true, then I challenge this dude to show final rescue video. Huge credit to pump & generator team. Unsung heroes here.
The tweetstorm was in response to a tweet from professor and New York Times writer Zeynep Tufekci, who wrote an op-ed arguing that Musk could learn a lesson about Silicon Valley hubris from the incident. (Her thread on celebrity intervention in rescue efforts is a great read.)
It’s not a rational or responsible desire to dream of expensive-looking sports cars. But still, the 20th century image of a curvaceous four-wheeled form continues to foster a culture of envy. The high price tags that made these cars a symbol of aspiration, greed, and everything in between have endured, as supercar performance numbers and prices reach astronomical seven-figure levels. What has been lacking in the rare European supercar category are cars powered by electric powertrains. But the days of the rude gasoline-inhaling performance car may be dwindling.
Pininfarina is the newest Italian brand to attach itself to the fast-growing list of luxury EV makers, but among the cult of Ferrari enthusiasts, Pininfarina is already a big deal. On Thursday, Automobili Pininfarina, a spinoff of the 90-year-old design house, unveiled teaser images of the PFO, its planned first-ever production car, a 250 mph battery-powered hypercar.
In an interview with The Verge, Automobili Pininfarina CEO Michael Perschke says the PFO will have a range of 300 miles on a single charge. “As a super sports car brand, no one has embarked fully on an EV strategy,” he says. He estimates that it will take 10 to 15 minutes to charge the battery up to 80 percent. The performance numbers are dizzying — it will fly from 0 to 60 miles per hour in under two seconds.
Those images of the PFO show a curvaceous, sculptural two-seater carbon fiber form encased in sweeping glass. A single ribbon of light cascades from the headlights across the front end. Its message is clear — it’s an object of beauty that screams speed. Translation: it’s a truly Italian sports car. It’s the latest smoke signal that the electric future is nigh, pairing screensaver-worthy cars with a Tesla-blazing powertrain, and perhaps eventually spelling the end for gasoline. In January, Ferrari revealed plans to make an electric supercar. These announcements follow on the heels of the reveals of the Porsche Mission E concept and the BMW i8 roadster, and McLaren’s intention to spend $2.1 billion on electrification.
For the couple hundred well-to-do customers who can’t resist this latest sub-$2 million proposition, the PFO will be unveiled as a concept car at the 2019 Geneva Motor Show and delivered in the second half of 2020. The company will start taking orders later this summer, when it shows a prototype to select groups at Monterey Car Week, where some of the world’s most expensive cars are auctioned, flaunted, and fawned over.
Pininfarina is in its element at Pebble Beach. The prize-winning 1936 Lancia Astura Cabriolet, once owned by Eric Clapton, won top honors at the Pebble Beach Concours d’Elegance in 2016. The iconic Turin, Italy-based coachbuilder is responsible for the aesthetic of some of the world’s most collected cars, including the Ferrari Testarossa, as seen in the SEGA game OutRun. Pininfarina’s namesake and founder, Battista Farina, was nicknamed Pinin, local dialect for a small guy, the lead designer told me. Farina found a kindred stubborn spirit in the engineer Enzo Ferrari when they first met in 1930. The Pininfarina house also built custom cars for Alfa Romeo, Maserati, and Cadillac, and has a list of over 1,000 cars in its books. More recently, its design arm has been hinting at alternative powertrains: it showed the Nido EV concept in 2010 and the H2 Speed concept, a hydrogen-powered race car, in 2016. Its list of star clients includes Jackie Kennedy and the Sultan of Brunei.
The value in that star-studded legacy inspired its current owner, the India-based Mahindra Group, to double down on the historic pedigree and move the Pininfarina name badge from the side of the car to the front hood of an electric supercar by founding the official Automobili Pininfarina brand. Mahindra is one of the driving forces on the Formula E circuit, which holds its New York race this weekend, and is well versed in EV production. The venture was first announced in April. This week, Automobili Pininfarina said that Formula One and Formula E racer Nick Heidfeld will join as a development driver next year as part of its growing leadership team.
But this classic brand isn’t approaching technology as an afterthought. It hopes to strike a chord with potential customers in Silicon Valley. “We assume that we appeal to customers like a Larry Ellison or Marc Benioff, who also have an affinity to sustainability and see technology as an advancement to get to the next level,” Perschke says.
Part of its business plan is to seek out partnerships with tech companies to own the hardware inside the vehicle. “We will not have a large department. We’d rather talk to others like Apple, Google, and Salesforce who are into technology, and integrate them rather than do our own systems. OEMs are still defending infotainment architecture. I’m happy to fully integrate an iPhone. But do you need to sell an infotainment system at a surplus of another $5,000?” he says.
The design arm of Pininfarina counts Volvo as one of its past clients, an automaker with a more contemporary approach to its in-car technology. “If you try to be a software company as a car company, per definition, you will always be second,” Perschke says. “You’re gaining a lot of accessibility and speed in open source systems. The apps are what clients are really interested in.” It’s a very different approach than a car with a similar price tag and mind-numbing performance, the Bugatti Chiron.
But in order to persuade customers to splurge on a two-seater performance car, it has to live up to its exclusive reputation, rooted in awe-inspiring form. Pininfarina has cred as the ultimate art car house: the Pininfarina-designed Cisitalia 202 was the first car included in the Museum of Modern Art’s permanent collection.
Design director Luca Borgogno says that building beauty is paramount, rather than sticking to the design adage of form following function. “We want to make a car that is not overdesigned. We want something that is super clean and impossibly simple.” As it’s primed to grow, the 10-person design team is borrowing members from Pininfarina SpA, the company’s traditional design house. That branch of the company recently showed another high-end transportation project: the Princess R35 Performance Sports Yacht.
Pininfarina plans to incorporate sustainability into its design appeal, modeling itself after Stella McCartney’s approach to materials in her high-end fashion brand. That vision includes materials that have been ethically sourced, natural woods, and paints without chemical ingredients. “The constraints are there, what is good is we live in the moment we can work a lot between human and machine. We want to make a big statement as well,” says Borgogno.
To build its cars, Perschke says it will share a factory with a few EV companies, suppliers, and assembly partners. Other vehicles are planned to follow the PFO, and the company will investigate hydrogen-powered vehicles, as referenced in the H2 concept. It’s also working on plans to repurpose its batteries. “For future cars we want a second life strategy,” Perschke says. “In 2023 to 2025 we will be perceived as a sustainable luxury brand.”
To make it to the next car, first, the over-the-top PFO needs to capture the hearts of discerning customers. If it’s successful, it may be a sign that the culture of unapologetic gas-guzzling engines is dwindling, an impact that could ripple into more affordable segments as battery technology and lightweight materials get cheaper. But at this level, the ridiculous price tag is part of what makes the car so appealing to high-rolling car collectors.
I’ve been using Apple’s new MacBook Pro for two days, and so far the most noticeable change is the keyboard.
The 2018 MacBook Pro is the first laptop from Apple to brandish the third generation of the company’s “butterfly” keyboard design that replaced the chiclet-style keyboards of the silver MacBooks that came before. Apple says the only improvement is that the keyboard is quieter to type on.
From teardowns, we know that there’s likely more to that story. A new layer of silicone appears both to cushion the keys and to protect them from dust and other particles. That could in turn improve the reliability of the keyboards, which has been a source of major concern. Apple is facing multiple lawsuits over the issue, and this upgrade could in fact be a “secret” way to address it without admitting there was a problem in the first place.
But for users, Apple’s legal side-step is totally beside the point — they just want to know how the keyboard feels. Well, it feels… better.
It definitely still feels like a butterfly keyboard. If my eyes were closed, and you put this keyboard in front of me, I’d call it a MacBook Pro butterfly right away (and clearly not a skinny MacBook keyboard, since the shake of the machine itself is different as you type). But it’s not quite like before.
For most of the rest of this article, you’ll need to forgive my frequent use of minimizing language like “slightly” and “a bit” because the change is really subtle.
The “give” on each key feels just a hair stronger. The keys — at least the letter keys — are a little more ready for your fingertips than the previous generation butterfly. The bounce makes the overall feel just slightly closer to the old-style chiclet keys, but not so much that you’d mistake it for one.
Are they quieter? Yes. Certainly, the volume of your keyboarding depends as much on your typing style as on the keyboard itself, but after switching back and forth from the previous-gen MacBook Pro, typing various sentences again and again, I can safely say the new keys will be a bit more forgiving.
Average typing noise is a difficult thing to quantify (although we’ll give it a go in our upcoming review), but it feels as if the extra silicone layer is doing its job as a cushion as well as stabilizing the horizontal travel of the keys a bit. On the previous Pro, it always felt as if there wasn’t much holding the keys in place besides the aluminum casing itself; as a consequence, if you would hit a key off-center, you could kind of feel that part of the key hitting bottom at a slight angle, which tended to be a “noisier” tap.
Again, these are the subtlest of details in a typing experience, and I by no means mean to say that typing was bad or unbearable on the previous Pro. But subtleties add up, and, for me, the sum was a lesser experience on the butterfly MacBook Pro than that of my workhorse machine: a silver 2015 MacBook Pro with chiclet keys.
Typing on that keyboard is an absolute joy — the kind of attention-to-every-detail experience Apple stakes its brand on. Although the new MacBook Pro hasn’t quite matched it, it has moved a step closer. Yeah, it’s kind of insane that Apple moved away from what many considered perfection in the first place, but if you’ve been holding out from upgrading because of an aversion to the butterfly, the Pro’s new keyboard is reason to pop out of your cocoon and give it a try.
Twitter has suspended two prominent accounts linked to the 2016 hack on the Democratic National Committee, Guccifer 2.0 and DC Leaks. The move comes after the Justice Department handed down an indictment of 12 Russian intelligence agents, which specifically named the accounts as part of the country’s propaganda efforts during the 2016 presidential election.
A Twitter spokesperson told the San Diego Union-Tribune that the accounts were suspended for being “connected to a network of accounts previously suspended for operating in violation of our rules.” The Justice Department indictment alleges that both accounts acted as fronts for agents in Russia’s Main Intelligence Directorate (GRU), who were responsible for conducting cyberattacks against state election boards, secretaries of state, election software providers, and the Democratic National Committee, in an effort to gather and leak damaging information during the election. In June 2016, Guccifer 2.0 pointed its followers to DC Leaks, which had released emails stolen from the DNC earlier that year.
The accounts have been unused for over a year and a half, and while both accounts had been suspended in the past, those suspensions were only temporary, seemingly for posting personal information, which violates Twitter’s terms of service.
Last month, Nintendo announced a contest that tasked fans with creating musical instruments and games out of its Labo kit. Today, the Japanese developer revealed the winners of the contest, and, naturally, Nintendo fans went all out.
Released in April 2018, the Nintendo Labo is aimed at children, teaching concepts such as programming and engineering. Players use the kits to build things such as cardboard robots and toy fishing rods, though the Labo can also be used for experiments and new creations.
Take the Labo piano decked out with Zelda decorations up top, for example. Not only do the decorations include a Master Sword, but there are also tiny Koroks hiding in the landscape as well. Its creator, Chris Brazzell, says that various pieces adorning the set were constructed with clay and origami. It also includes an IR sticker that makes it possible for the Labo to do something special when the Master Sword is pulled out. (Brazzell has not set a specific functionality for it yet, but it’s a nice touch.)
Perhaps the most impressive invention comes from Momoka Kinder, who created an accordion that’s powered by sunlight. She built the accordion with simple objects, such as tissue boxes and rubber bands in an effort to make it easy for other people to re-create the project. According to Kinder, the accordion plays a sound when the notes detect that you are blocking the sunlight on the buttons. Here’s a demonstration by Kinder, who also shows off the programming that makes everything function:
Smartphones may have smaller sensors and lenses than DSLRs, but what the cameras in our pockets lack in hardware, they can (sometimes) make up for with software and computing power — as well as tweaks to that tiny hardware. Portrait mode is now a common feature on most smartphones, but what exactly does it do? Is it just another catchphrase to get you to pay more money for a phone, or does portrait mode really capture better photos?
While the technology behind the camera feature differs between smartphones, portrait mode is a form of computational photography that helps smartphone snapshots look a bit more like they came from a high-end camera. Here’s how portrait mode works.
What is portrait mode?
Portrait mode is a feature in quite a few smartphones that helps you take better pictures of people by capturing a sharp face and a nicely blurred background. It’s specifically made to improve close-up photos of one person — hence the name portrait (though you can use it for objects). Portrait mode started as one of the scene modes you typically find on a digital camera, but the feature has since been adapted to smartphone photography. While the portrait mode on a digital camera and the portrait mode on a smartphone share the same name, they vary drastically in how the image is taken.
Portrait mode is a form of computational photography that artificially applies blur to the background.
When first offered as a photo mode on digital cameras, portrait mode helped novice photographers take better portraits by adjusting the camera settings. The aperture, or the opening in the lens, widens to blur the background. A blurred background draws the eye to the subject and eliminates distractions in the background, so wide apertures are popular for professionally shot portraits. Over time, additional optimization was added in, such as improving the processing to make faces even clearer by eliminating red eye and adjusting the autofocus.
A smartphone camera, however, cannot adjust those settings to take a better portrait. For starters, the aperture on most smartphone cameras is fixed, so you can’t actually change it (the Samsung Galaxy S9 and S9 Plus are notable exceptions). Even on the few models that allow for an adjustable aperture, however, the lens and sensor inside a smartphone camera are too small to create the blur that DSLRs or mirrorless cameras are capable of capturing.
Smartphone manufacturers can’t fit a giant DSLR sensor inside a smartphone and still have it fit in your pocket — but smartphones have more computing power than a DSLR. That difference is what powers a smartphone’s portrait mode. On a smartphone, portrait mode is a form of computational photography that artificially applies blur to the background to mimic the background blur of a DSLR. Smartphone portrait mode relies on a mix of software and hardware.
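The core idea — blur everything the depth map marks as far away, and leave the nearby subject untouched — can be sketched in a few lines. This is a toy illustration, not any phone maker's actual pipeline: the grayscale image, the hard depth threshold, and the single Gaussian blur are all simplifying assumptions (real implementations blend multiple blur strengths and work per color channel).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(image, depth, threshold, sigma=5.0):
    """Blur pixels whose estimated depth exceeds `threshold`; keep nearer pixels sharp.

    image: 2D grayscale array (a real pipeline would handle RGB channels).
    depth: 2D array of estimated distances, same shape as `image`.
    """
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    background = depth > threshold                 # True where the scene is "far"
    return np.where(background, blurred, image.astype(float))

# Toy scene: a bright "subject" square in front of a distant background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                            # the subject
depth = np.full((64, 64), 10.0)                    # background: 10 m away
depth[24:40, 24:40] = 1.5                          # subject: 1.5 m away

result = portrait_blur(img, depth, threshold=3.0)  # subject stays sharp, rest is blurred
```

The same function works regardless of where the depth map came from — two lenses, split pixels, or a pure software estimate — which is why the depth map is the part that differs between phones.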
Blurring the background of a photo is tougher than it sounds — for starters, the smartphone needs to be able to tell what’s the background and what’s not in order to keep the face sharp. Different manufacturers have found different ways to determine what to blur and what to leave sharp, which means that, brand by brand, smartphone portrait modes can look considerably different.
If you really want to learn how portrait mode works on modern smartphones, it’s important to understand the tricks phone manufacturers use to enable this feature.
How phones make portrait mode work
Apple was widely recognized as fueling the portrait mode trend when it introduced the feature on the iPhone 7 Plus in 2016. It proved popular, and a number of other manufacturers began releasing phones that included their own portrait optimization. Let’s break down the different methods used to blur that background:
Two-lens depth mapping
The original smartphone portrait mode requires a dual-lens camera. Depth mapping uses both the telephoto lens and the wide angle lens on a smartphone to examine the same visual field and compare notes. These two different viewpoints can work together to create a “depth map,” or an estimation of how far away objects in the shot are. With the depth map, the smartphone can then determine what’s the background and what’s not.
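To make that comparison concrete, here is a minimal, hypothetical version of it in one dimension: slide a small patch from the “left” view along the “right” view and record the shift where it matches best. Nearer objects shift more between the two lenses, so larger disparity means closer. Real phones do this in 2D with calibration and heavy filtering; this sketch only shows the principle.

```python
import numpy as np

def disparity_1d(left, right, patch=2, max_disp=8):
    """Estimate per-pixel horizontal disparity between two 1D 'views'.

    For each position in `left`, find the shift (0..max_disp) of the
    best-matching patch in `right`. Larger disparity = closer object.
    """
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(patch, n - patch - max_disp):
        ref = left[x - patch : x + patch + 1]
        errors = [
            np.sum((ref - right[x + d - patch : x + d + patch + 1]) ** 2)
            for d in range(max_disp + 1)
        ]
        disp[x] = int(np.argmin(errors))           # best-matching shift
    return disp

# Toy scene: a bright object that appears shifted 3 pixels between the views.
left = np.zeros(40)
left[10:15] = 1.0
right = np.zeros(40)
right[13:18] = 1.0                                 # same object, shifted right by 3

d = disparity_1d(left, right)                      # d is 3 across the object
```

The resulting disparity array is the 1D analogue of the depth map: invert it (with the lens geometry) and you know how far away each pixel is.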
Combined with face detection technology, the phone runs the image through a blurring algorithm that attempts to blur the background and highlight the face. This is the technique used by the latest Samsung Galaxy phones, and the latest iPhone devices, including the iPhone X.
Pixel splitting
Rather than requiring two lenses, pixel splitting requires a specific type of camera sensor and just one lens. Instead of creating a depth map using two different lenses, this technique creates a depth map from two different sides of the same pixel. In smartphones with dual-pixel autofocus, like the Google Pixel 2, a single pixel actually has two photodiodes. Just like with a dual-lens camera, the software can compare the slightly different views from both sides of the pixel to create a depth map, then apply blur without needing to consult an image from a separate lens. Phones with this capability can take portrait mode photos with the front-facing camera as well, which may be better suited for your selfies.
Software-only portrait mode
Ideally, portrait mode uses a mix of hardware and software for the best results. But what if you can’t control the hardware? Apps designed to work on multiple devices use artificial intelligence and facial recognition to guess where the person is and where the background is. The result isn’t as accurate as methods that use both hardware and software because there’s no depth map, but this type of portrait mode is available from a wider range of smartphones. Instagram has a version of portrait mode inside the built-in camera that it calls Focus.
What’s the difference?
Because the software and algorithms used for these techniques can differ, you can still wind up with different results for any of these methods. How different? The Unlockr took a look for us, comparing the Galaxy Note 8, iPhone X, Huawei Mate 10 Pro, and Pixel 2 XL. Note the shading and background differences for these shots, and you can see that there are differences in how portrait mode performs on different models.
Along with getting different results, different devices will have distinct features. Because the Pixel 2 doesn’t need two lenses, its portrait mode works with both the rear-facing and front-facing cameras. The iPhone X can also use portrait mode on the front-facing camera, but it relies on a 3D depth map from Face ID.
The bottom line on portrait mode
The best portraits still come from interchangeable-lens cameras, thanks to aperture control and larger sensors — no other portraits will look as good. However, computational photography allows smartphones to come closer than ever before by artificially blurring the background. The Pixel 2 XL appears to take the best portrait photos, thanks to intelligent software and one of the best smartphone cameras around. The iPhone X also performs well, although it has a tendency, in our experience, to darken images a little.
While portrait mode differs between models, the biggest difference is between a phone with portrait mode, and one without. Without the hardware to create a depth map, portrait modes can’t quite reach the same level of realistic background blur. If you snap a lot of images of people, portrait mode makes for a dramatic improvement in photo quality, even coming from a smartphone. That difference is enough to warrant opting for a particular phone.
When I was younger, I had a soccer coach who stressed the importance of anticipation. “An-tiiii-ciiiiiii-PAY-shun,” he’d yell at us, while we were diving around for the ball. If we did it right, he promised, we’d be able to do in soccer what Neo does in The Matrix — not, like, stop bullets, but be in the right place at the right time to stop an attack on our goal. I wasn’t too great at it, at least not at first.
But the lesson stuck. I can hear coach’s voice even now, when I navigate the crush of travelers during New York City’s all-too-frequent rush hours. This is all to say that prediction is key; it’s the difference between getting the ball in the back of the net and whiffing entirely, the gap between getting a seat on a crowded train or having to wait, chastened, for the next one. And, as I recently learned, prediction is the difference between a YouTube video and glitch art.
The other day I came across a Twitter bot, @youtubeartifacts, which tweeted out screenshots and clips from random YouTube videos — but the images and videos were bitcrushed, pixelated, and kinetic, more abstract painting than encoding error.
There’s a name for this kind of glitched-out aestheticism, and it turns out to have a well-established artistic past. “The bot uses my own variation on an old glitch art technique called ‘datamoshing,’ which basically generates a specific kind of h264 compression glitch which creates the smeared, pixelated, sometimes painterly artifacts you see in the output,” says David Kraftsow, the artist behind @youtubeartifacts. (H.264, also known as MPEG-4 Part 10 or Advanced Video Coding, is a video compression standard — for recording, compression, and distribution — that was finalized in 2003 and is now used across most of the internet, providing better video quality than earlier standards.)
“It’s actually a somewhat old glitch art project of mine that’s gone through a lot of iterations, the most recent of which is the Twitter bot,” Kraftsow writes to me in an email. It started as a website in 2009, where anyone could enter a YouTube URL and see specific glitch effects in their browser — but it was hard to maintain, Kraftsow explains, which meant it didn’t last very long. Then, the curators of digital art collective Rhizome asked him to create a more robust version: a desktop app.
“I refashioned the site and had it look specifically for ‘vlogger’ content to generate stills,” he said. “Then a few years ago” — February 2015 — “I made the app into a Twitter bot, which itself has gone through a few versions. The most recent of which generates 4K imagery from a convoluted youtube search that looks for (among other things) vloggers, beauty/cosmetics vids, sports, and nature/landscape videos.”
As Kraftsow mentioned, datamoshing is a type of glitch art — which, in the context of art history, can be broadly defined as art created by corrupting or otherwise manipulating an existing file — that has roots in the net art movement of the early aughts. One of the most influential examples of the technique was a 2003 video called “Pastell Kompressor,” by the artists Owi Mahn and Laura Baginski. “As basis for ‘pastell compressor’ we have been using time-lapse shootings of clouds drifting by, which we took on the plateaus in the south of france [sic],” they wrote. They ran it through a proprietary codec, called “sörensen-3,” which blended the French plateaus with a person’s figure. Two years later, the artist Takeshi Murata created “Monster Movie,” which blended footage from a 1981 B-movie and a heavy soundtrack and which is now in the permanent collection at the Smithsonian as perhaps the most influential piece in the datamosh canon. In 2009, Kanye West would use the technique in his video “Welcome To Heartbreak.”
Conceptually, datamoshing is pretty easy: to create the most basic version of those dramatic, pixelated effects, all you have to do is take advantage of how videos are encoded. Essentially, there are three kinds of frames, which store compressed images: I-frames, P-frames, and B-frames. I-frames are “intra frames,” which means they contain a complete compressed image. P-frames are “predictive frames,” which hold abstract information — essentially, they store data for how the video’s pixels move, and nearly nothing else. (B-frames are a little different, because they’re like predictive frames but they’re bi-directional; they don’t have much to do with glitching.) So, to datamosh, all you do is delete the I-frames. Delete the image data — all the identifiable, still images of the video — and you’re left with the abstract, interior information that populates the space between images. You just leave in the ann-tiii-ciii-PAY-shun, the predictions, which on their own produce the hallmark swirl of glitchy pixels that visually define a datamoshed video. Simple, right?
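As a thought experiment, that step can be reduced to a few lines. This toy treats the video as nothing but a list of frame-type labels — real tools like Avidemux operate on the actual encoded bitstream — and it keeps the very first I-frame, since in practice the decoder needs one image to start smearing from:

```python
def datamosh(frames):
    """frames: list of 'I', 'P', or 'B' labels.

    Returns the list with every I-frame after the first removed,
    so the decoder keeps applying motion predictions to a stale image —
    the cause of the smear effect.
    """
    out = []
    seen_first_i = False
    for f in frames:
        if f == "I":
            if seen_first_i:
                continue          # delete later keyframes
            seen_first_i = True   # keep one to seed the decoder
        out.append(f)
    return out

video = ["I", "P", "P", "B", "I", "P", "P", "I", "B", "P"]
moshed = datamosh(video)
# moshed == ["I", "P", "P", "B", "P", "P", "B", "P"]
```

Every cut in the source video normally starts with a fresh I-frame; with those gone, the motion data from the next scene gets painted onto the pixels of the previous one.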
I decided to try it for myself, starting with something familiar: Verge Science’s excellent video on graphene that came out earlier this week. I cut the video down to 45 seconds using iMovie, which felt like a manageable enough length, then I ran it through Avidemux version 2.5.4 (a free, popular video editor) to delete my I-frames; then I used VLC (an excellent video player) to play back my results. (A good rule of thumb about I-frames is that, because they’re anchor points, they exist at just about every cut. Avidemux identifies them for you — just press the up and down arrow keys to scroll through every single one in a video.)
It took me six attempts and nearly an hour to get from the first 45 seconds of this…
It was a little harder than I thought. But I persevered. I believed in my P-frames. Eventually, I got this.
It’s like my soccer coach might say: Perseverance is just as important as figuring out where your pixels are going.