TomTom reports better-than-expected third-quarter core earnings

AMSTERDAM (Reuters) – TomTom (TOM2.AS), the Dutch maker of digital mapping software, on Tuesday reported better-than-expected third-quarter core earnings of 62.4 million euros ($72.2 million), compared with 35.5 million euros a year earlier.

FILE PHOTO: A TomTom navigation device is seen in front of the displayed TomTom logo in this illustration taken July 28, 2017. REUTERS/Dado Ruvic

A company-compiled consensus had put earnings before interest, taxes, depreciation and amortization (EBITDA) for the quarter at 41 million euros.

The company raised its full-year revenue outlook to 850 million euros from 825 million euros, but said a contract announced in 2016 to provide location and navigation services to Volvo had been ended.

Reporting by Toby Sterling; Editing by Subhranshu Sahu

Anand Giridharadas on Saudi Money and Silicon Valley Hypocrisy

Silicon Valley’s deep financial ties to Saudi Arabia illustrate “the hypocrisy behind the ‘change the world’ fantasy” pushed by tech companies, said journalist Anand Giridharadas. Saudi backing for popular apps like Uber, Slack, and Wag offers proof that “the most idealistic companies on earth—in rhetoric—are very happy to take the dirtiest money on earth to grow and grow and grow,” he said.

Giridharadas, author of Winners Take All: The Elite Charade of Changing the World, spoke at the WIRED25 festival on Sunday, on a panel about the trouble with techno-utopianism. He argued that the uproar around the disappearance of journalist Jamal Khashoggi, who was allegedly killed by Saudi agents last week, forces the tech industry to face the reality of the Saudis.

The relationship has worked well for the Saudis, who, Giridharadas said, have financed popular apps as “a form of influence peddling” to distract people from things like the way oil contributes to climate change.

However, in light of the graphic details that have emerged about Khashoggi’s alleged murder, Silicon Valley can “no longer hide behind an idea that it’s another player in Davos in the Desert,” he said, referring to an upcoming investment conference in Riyadh arranged by the Saudi government. Several tech luminaries scheduled to speak at the summit have dropped out following Khashoggi’s disappearance and possible murder. But there’s been no reckoning with the billions the Saudi government has funneled into tech companies through its Public Investment Fund.


The panel was moderated by Virginia Heffernan, an author and contributor to WIRED, who quickly challenged Giridharadas on the idea that anyone came to Silicon Valley to associate themselves with repressive regimes. Heffernan offered her own brief experience with the Saudi government as an instance of good intentions. Years ago, Heffernan said she was paid about $24,000 for two speaking gigs in Saudi Arabia, even though the sessions were later cancelled. Perhaps receiving such a large sum, roughly a quarter of what she made while she had been on staff at the New York Times, colored her view of the regime. “I suddenly thought Saudi Arabia is not that bad,” she said.

“I think that that’s what the VCs think,” Heffernan said. “Suddenly the money’s flowing and yet we’re beholden to them.”

Giridharadas agreed. “The winners of our age are not bad people. They’re not evil people. They are people motivated, as they ought to be under the system that we have, by the pursuit of profit. And that makes them very good at a bunch of things like building businesses and creating things and inventing things,” he said. But what his book Winners Take All explores is the way that pairing the pursuit of profit with the rhetoric of social change has led us to a place where we look to the same tech leaders funded by the Saudis to save the world.

“How did we decide to outsource the improvement of the human condition to those people?” Giridharadas asked. “The Saudi thing and your experience illustrate [that] it’s not bad people, but it’s just people who are ill-positioned to balance the voice of greed with the voice of the good.”


Snapchat Adds Cat Lenses So You Can Put Filters On Your Cat

Snapchat has had its signature filters to make selfies pop a little extra for a while, but now even pets can take advantage of the fun.

The messaging app just unveiled its new Cat Lenses feature, which lets you put filters, previously reserved for human faces, on your cat. You can even include yourself in the photo and use matching filters on both you and your cat. Snapchat announced the update on Twitter with the caption, “Lenses. For cool cats and their cool cats. Try them meow.”

The update builds on the object recognition software added to the app last year, according to TechCrunch. That technology allowed the app to identify an object or bring up a sales page for it.

Cat Lenses are just the latest of Snapchat’s seemingly unending ideas. Earlier this week, Snap said it would bring original programming to its signature app including scripted shows and docuseries.

Google Fuchsia: Here's what the NSA knows about it


A while back, Google told us Fuchsia is not Linux. There have also been endless rumors, with little hard proof, that it will eventually replace Android. Other than that, we don’t know much. But the National Security Agency (NSA), of all groups, has been checking into Fuchsia and revealed its findings at the recent Linux Security Summit North America in Vancouver, B.C.


Fuchsia is a modular operating system

James Carter and Stephen Smalley of the NSA showed off some Fuchsia secrets. Their focus was on security in Fuchsia and Zircon, its underlying micro-kernel.

Zircon started as a fork from the Little Kernel, the Android bootloader. It’s been heavily modified to become a micro-kernel operating system. It now includes a small set of userspace services, drivers, and libraries. These are used to boot the system, talk to hardware, load userspace processes and run them, and not much more. The kernel manages several different object types. Those that are directly accessible via system calls are C++ classes. Fuchsia is built on top of this.

It’s a modular operating system. This implies you’ll be able to use it on low-powered, minimal-resource devices all the way up to PCs. You simply add the object modules for more functionality.

It looks like Unix/Linux

Fuchsia also supports a subset of Portable Operating System Interface (POSIX) conventions.

This means, from a developer’s viewpoint, it looks like Unix/Linux. Fuchsia uses Google’s Flutter as its software development kit (SDK). With it, you can build Chrome OS and Android apps. Fuchsia also supports Apple’s Swift language.


Numerous security issues

Smalley and Carter’s job is to investigate operating systems and software for potential use in national security jobs. In short, to see if it’s easy to break. The NSA doesn’t want the government using fragile systems.

Smalley also helped create SELinux, the most secure approach to running Linux. In checking out Zircon and Fuchsia, the pair shared their discoveries about the operating system.

First, they found that Zircon is the only part of Fuchsia that runs in supervisor mode. Everything else (drivers, filesystems, networking, and so on) runs in user mode. This means programs on Fuchsia will take a very different approach than they do on most operating systems.

While looking deeper, they also found numerous security issues. For example, Carter said, “You can acquire a handle to anything in that job or any child jobs,” and, naturally enough, “a leak of root job handle is fatal to security.”
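Carter’s warning is easier to picture with a toy model of the job hierarchy. The sketch below is plain Python, not the real Zircon API (the class and job names are invented for illustration): a handle to a job reaches that job and, transitively, every child job, so a leaked root-job handle reaches everything.

```python
# Toy model of a Zircon-style job tree (illustrative only; not the real API).
class Job:
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def reachable(self):
        """Every job a handle to this job can reach: itself plus all descendants."""
        jobs = [self]
        for child in self.children:
            jobs.extend(child.reachable())
        return jobs

root = Job("root")
system = Job("system", parent=root)
app = Job("app", parent=system)
sandbox = Job("sandbox", parent=app)

# A handle to the 'app' job only reaches its own subtree...
assert [j.name for j in app.reachable()] == ["app", "sandbox"]
# ...but a leaked root handle reaches every job in the system,
# which is why a root-job handle leak is "fatal to security".
assert [j.name for j in root.reachable()] == ["root", "system", "app", "sandbox"]
```

The point of the model is only the transitivity: in a capability system, whoever holds the topmost handle inherits access to the entire tree beneath it.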

Much work needs to be done

Fuchsia’s issues are big enough that, as of this summer, it was far from being ready for production. As Carter explained, Fuchsia is very much a “work in progress” system and “a lot of work needs to be done” before Fuchsia is secure.

Compared to Linux, the still-immature Fuchsia is far from secure.

But, Carter remarked, while “much work” needs to be done, it can be made secure, and he encouraged open-source developers to help Google lock Fuchsia down.

Immature or not, Fuchsia might soon be running on the forthcoming Google Home Hub.


Is Fuchsia inside Google Home Hub?

Home Hub is a new Internet-of-Things (IoT) device. It’s essentially a Google Home with a 7-inch touchscreen. It includes a fabric-encased full-range speaker, a light sensor, and two far-field microphones. It doesn’t include a video camera. But, under the hood, it will sport an Amlogic S905D2 CPU instead of a Qualcomm SD624 SoC.

The good people at 9to5Google, who have been covering Fuchsia like hawks, put two and two together and started digging into the Google Home Hub’s source code. They found traces of Fuchsia. Now, this doesn’t mean it will arrive under your Christmas tree running Fuchsia, but it might!

Will you want to? Based on what the NSA found, I’d say not. But, if you want to tinker with Fuchsia, it might be worth getting the new Google Home Hub.


Why Someone Put a Giant, Inflatable Bitcoin Rat on Wall Street, Facing the Federal Reserve Bank

Bitcoin was created in part out of a distrust of centralized authorities like the Federal Reserve. Now a symbol of the cryptocurrency’s growing threat to the Fed stands on Wall Street: a giant, inflatable rat covered in crypto code.

The bitcoin rat, first noted on Reddit, was created by Nelson Saiers, an artist and former hedge fund manager, according to Coindesk. The art installation, which appeared earlier this week and is temporary, is intended as much as a tribute to bitcoin’s creator Satoshi Nakamoto as a condemnation of the Fed and critics of cryptocurrencies.

“The sculpture’s supposed to kind of reflect the spirit of Satoshi and what he’s trying to do,” Saiers, who noted the rat image was inspired in part by another titan of traditional finance, told Coindesk. “Warren Buffett called bitcoin ‘rat poison squared’ but if the Fed’s a rat, then maybe rat poison is a good thing,” he said.

Fed officials have made comments on cryptocurrencies that range from the critical to the conciliatory. Last December, then-Fed Chair Janet Yellen called bitcoin a “highly speculative asset” that “doesn’t constitute legal tender.” In April, one Fed official claimed bitcoin couldn’t replace the dollar, while another conceded it’s “like regular currency” in that it has no intrinsic value.

Inflatable rats have become a staple of union protests during the past quarter century, so much so that a few companies specialize in renting them out to organizers. “Rat” is not only an epithet thrown at nonunion contractors, it symbolizes greedy, unscrupulous behavior ascribed to companies opposing unions.

“This is a very iconic image for protest,” Saiers told blockchain news site Breaker. “Somewhere in the heart of bitcoin is a bit of protest of big bank bailouts.”

That idea appeared to be lost on some Redditors, who claimed they spotted the bitcoin rat in the wilds of Wall Street but didn’t immediately see its significance. “I walked past it today,” one wrote. “Had no idea it was about Bitcoin.” “It’s cool, but people walking by won’t understand it,” said another. “I don’t even understand it. Needs a BTC logo or something.”

Soyuz Rocket Failure Jeopardizes Future ISS Missions

A NASA astronaut and a Russian cosmonaut were forced to make a dramatic landing after their ride to space, a Russian Soyuz rocket, failed minutes after takeoff. The incident caused the crew to initiate emergency abort procedures, landing a few hundred miles away from the launch site. Both Nick Hague and Alexey Ovchinin are safe.

The crew launched from the Baikonur Cosmodrome in Kazakhstan at 4:40 am ET and was scheduled to dock at the ISS six hours later. But about two minutes into the flight, the Soyuz suffered an unspecified failure and the onboard computer initiated the abort. “There was an issue with the booster from today’s launch,” a NASA spokesperson says. “The Soyuz capsule returned to Earth via a ballistic descent, which is a sharper angle of landing compared to normal.”

Dmitry Rogozin, head of Roscosmos (Russia’s space agency), has announced that all crewed missions will be put on hold for the foreseeable future while the agency investigates the failure. The Russian state corporation, along with NASA, is already analyzing data to determine what caused the anomaly. “NASA Administrator Jim Bridenstine and the NASA team are monitoring the situation carefully,” the space agency said in a statement following the mishap. “NASA is working closely with Roscosmos to ensure the safe return of the crew. Safety of the crew is the utmost priority for NASA. A thorough investigation into the cause of the incident will be conducted.”

This incident marks the first failure for the Russian human spaceflight program since 1983, when a Soyuz exploded on the launch pad. (The two Soviet cosmonauts on board, Vladimir Titov and Gennady Strekalov, were able to jettison to safety.) But it’s also the second mishap in recent months for Russia’s trusty Soyuz. In August, the crew members onboard the space station discovered an air leak originating from one of the Soyuz capsules that was docked with the orbital outpost. The leak was eventually traced to a tiny hole in the Soyuz’s orbital module. Crew members were able to repair the ship, and no one was in any danger. However, the leak has been a source of controversy as officials work to determine how the hole was made. Russian media outlets have tried to suggest on-orbit sabotage, implying that one of the crew members on board the ISS intentionally drilled the hole. NASA has refuted those claims, and Bridenstine is currently in Russia for the launch as well as to meet with Russian space officials.

Currently, the Russian Soyuz spacecraft is the only vehicle capable of ferrying crews to the ISS. In 2011, NASA’s fleet of space shuttles was retired, leaving the agency (and others around the world) dependent upon Russia for access to space. Commercial companies like SpaceX and Boeing are building NASA’s next generation space taxis, but they are not yet ready to fly. (The first flights of SpaceX’s Crew Dragon and Boeing’s Starliner crew capsules are expected to take off next year).

This failure raises serious questions about the future of the International Space Station, as the Soyuz spacecraft (and rocket) are the only means by which crews can reach it. It is not clear how long the Soyuz vehicle will be grounded, or how long the current crew (American astronaut Serena Auñón-Chancellor, German commander Alexander Gerst, and Russian cosmonaut Sergey Prokopyev) can remain in orbit. They’re scheduled to come home on December 13, although it’s likely their mission will be extended.

Their scheduled replacements (cosmonaut Oleg Kononenko, Canadian astronaut David Saint-Jacques and NASA astronaut Anne McClain) were slated to launch on Dec. 20, but as of now their flight is uncertain pending the outcome of this investigation. NASA is still working out the plans going forward concerning both the crew and space station. While the agency can run the space station from the ground, agency officials prefer to have crew onboard, resulting in an extended stay in space for Auñón-Chancellor, Gerst, and Prokopyev. Supplies on board are ample so the crew is in good shape in terms of consumables.

Transportation, however, may be a bit trickier. Each Soyuz spacecraft is only certified to stay docked to the space station for approximately 200 days. With their lifeboat’s shelf life set to expire in January 2019, the crew could either be stranded or forced to abandon the space station. Both the rocket and the spacecraft to be used for the next launch are nearly ready to fly, however, so it’s entirely possible that the next Soyuz could launch without people on board, serving as an extra lifeboat to fetch the current crew.


Microsoft's patent move: Giant leap forward or business as usual?

When Microsoft surprised everyone by opening its entire 60,000-patent portfolio to the open-source community, someone asked me if I thought the move would finally convince everyone Microsoft is truly an open-source friendly company.

“Oh no,” I replied.


Sure enough, some folks are still convinced that Microsoft is intending to “embrace, extend, and extinguish” open source. Many others believe, however, that Microsoft has truly evolved and has become an open-source company.

Is it a trap?

On the purely positive side, we have Jim Zemlin, The Linux Foundation‘s executive director:

“We were thrilled to welcome Microsoft as a platinum member of the Linux Foundation in 2016 and we are absolutely delighted to see their continuing evolution into a full-fledged supporter of the entire Linux ecosystem and open-source community.”

Patrick McBride, Red Hat‘s senior director of patents added, “What a milestone moment for open source and OIN! Microsoft is joining a unique shared effort that Red Hat has helped lead to bring patent peace to the Linux community. Developers and customers will be the beneficiaries. Now is a perfect time for others to join as well.”

On the haters’ side, there is Florian Mueller, editor of the FOSSPatents blog, who thinks:

“‘Microsoft loves Linux’ is a lie. And now Microsoft wants us to think that Microsoft battles patent trolls. This too is a Microsoft lie.”

He also said joining the OIN, which Mueller considers a pro-patent IBM front group, “imposes no actual new constraints on them.” This is just a cynical PR move from Mueller’s viewpoint.


Other anti-Microsoft die-hards on Reddit, Twitter, and other social networks also insist that this new Microsoft is the same as the old Microsoft. Or, as one person, harking back to Star Wars, remarked: “It’s a trap!”

Microsoft finally gets open source

Microsoft insists that it has been changing its open-source ways for years. In a recent Open Source Virtual Conference keynote, John Gossman, a distinguished Microsoft Azure team engineer, described former Microsoft CEO Steve Ballmer’s 2001 comment that Linux was “a cancer” as “a fundamental misunderstanding of open source.”


With Satya Nadella as CEO, Microsoft finally gets open source.

What the patent experts are saying…

But it’s not just Microsoft staffers who are saying Microsoft’s attitude toward open source has evolved. Andrew “Andy” Updegrove, patent expert and founding partner at the Boston-area law firm Gesmer Updegrove, said:

“While this may seem surprising to those who have not followed Microsoft’s evolution in recent years, it is in fact more a formal recognition of where they, and the realities of the IT environment are today.”

Daniel Ravicher, executive director of the Public Patent Foundation (PUBPAT), whose work was once used by Ballmer against Linux, wasn’t surprised by this move:

“With the acquisition of GitHub and other things the company has done, they’ve really changed their tune in the past 15 years. They also hired as an in-house attorney a former staff attorney of the Software Freedom Law Center (SFLC). It may be like the Korean War that doesn’t have a formal end date, but I think now Microsoft and open-source software are on the same page and working together.”

Prominent open-source attorney and Columbia University law professor, Eben Moglen, also sees this as a move towards patent peace. Moglen remarked:

“Microsoft’s decision signals the transition from the period of patent war to the making of industry-wide patent peace for free and open-source (FOSS) software. Microsoft’s participation in the OIN licensing structure will be the tent pole for the extension of OIN’s big tent across the world of IT. For SFLC and other parties whose job it is to secure the interests of individual FOSS programmers and their non-profit projects, this is also the moment of opportunity to ensure their safety and respect for their mode of development across the entire industry, including by companies who continue to engage in patenting their own R&D.”


Why is Microsoft doing this when it makes money from patents?

Scott Guthrie, Microsoft’s executive vice president of the cloud and enterprise group, described the decision as a “fundamental philosophical change,” resulting from an understanding that open source is inherently more valuable to Microsoft than patent profits.

John Ferrell, chair at the Silicon Valley technology law firm Carr & Ferrell, thinks there may be a more pragmatic reason behind Microsoft’s move:

“Microsoft’s gesture to donate 60,000 patents to the OIN is indeed a philosophical change for this giant, but the change likely is rooted in the realization that the Company is much better suited to fight in the marketplace rather than to fight in the courtroom. Virtually every patent-owning company that gets into a patent battle with Microsoft is fighting from a position of asymmetrical advantage. Where damages are based on a percent of sales, Microsoft almost always has more to lose. This is especially true of companies that leverage open-source software; these companies tend to be small, and patent infringement is difficult and expensive for Microsoft to police.”

Ferrell, the litigator, continued:

“From a defensive standpoint, small companies with one or two patents arguably infringed by Microsoft are especially annoying and potentially damaging to this goliath. Microsoft is a huge target and is constantly barraged with patent lawsuits by small and large companies trying to gain a foothold or monetize their development efforts at the expense of Microsoft’s deep pockets.”

An additional reason for Microsoft’s change of heart, according to Rafael Laguna, CEO of Open-Xchange, an open-source network services company, is:

“Microsoft boss Nadella wants to buy new credit in the open-source industry, distancing the company from the business model and practises of his predecessors, i.e. Gates’ and Ballmer’s sincere dislike of open-source developers.” Nadella, however, “recognizes that Microsoft’s future revenue will come from providing cloud services, rather than selling operating system licenses. And for cloud services, Linux is now the operating system of choice – underpinned by the fact that already half of the Microsoft Azure services are based on Linux today.”


Will this bring peace to our time?

Bradley Kuhn, president of the Software Freedom Conservancy (SFC), appreciates Microsoft joining the OIN patent non-aggression pact, noting: “Perhaps it will bring peace in our time regarding Microsoft’s historical patent aggression.”

Microsoft needs to do more, Kuhn added, “We call on Microsoft to make this just the beginning of their efforts to stop their patent aggression efforts against the software freedom community.”

Specifically, he said, “We now ask Microsoft, as a sign of good faith and to confirm its intention to end all patent aggression against Linux and its users, to now submit to upstream the exfat code themselves under GPLv2-or-later.”

Exfat, a file system, was open-sourced by Samsung with the SFC’s help in 2013. But Kuhn said, “Microsoft has not included any patents they might hold on exfat into the patent non-aggression pact.”

It should be noted that, when asked about FAT-related patents, Erich Andersen, Microsoft’s corporate vice president and chief intellectual property (IP) counsel, said:

“We’re licensing all patents we own that read on the ‘Linux system.’” In addition, all of Microsoft’s 60,000 granted patents relating to the Linux system are covered by the OIN’s requirements.

In a subsequent e-mail, Kuhn noted, “Ultimately, the OIN license agreement is quite narrowly confined to the ‘OIN Linux System Definition’ and therefore doesn’t assure that patent aggression must stop immediately; rather, Microsoft is only required to stop for those patents that read on technologies in the OIN Linux System Definition.”

So, for example, BSD-specific code wouldn’t necessarily be covered.

Therefore, Kuhn suggested:

“Expanding the ‘Linux System Definition’ would be a useful way to solve this problem through OIN.”

Historically, OIN has been expanding the Linux System Definition.

Kuhn concluded:

“More importantly, Microsoft can help solve it unilaterally by submitting patches that implement technology from their patents into upstream projects that are already contained in the Linux System Definition. I suggest they start with upstreaming exfat in Linux themselves.”



So, while a few people think Microsoft is up to no good, the experts agree this is a laudable move by Microsoft to show its open-source bona fides. Some still want to see more proof of Microsoft’s intentions, but overall this is a major step forward for Microsoft, Linux, and open-source intellectual property law.


5 Easy Microsoft Excel Tips That Can Save You 10 Hours a Week

What are your thoughts on Microsoft Excel? In most cases, people would say they either love it, hate it, or are too intimidated to delve into it.

Thanks to the influx of more user-friendly cross-platform applications for data storage, many would also argue that Excel has become far too dated for regular use. However, Excel continues to dominate the business world.

It’s often used for complex analyses, in addition to forecasting models and storing vast amounts of data in a single file. And while there are plenty of applications designed to replace spreadsheets, they often fall short in different areas, which leaves you with limited options. As simple as it may appear, Excel usually offers you more usability and data control.

With a little bit of practice, it’s perfectly possible for almost anyone to squeeze the most out of Excel. It’s come a long way since the ’80s. These five tips can be used personally and professionally, and some of them don’t even call for endless rows of digits:

1. Create a custom calculator.

The capabilities of calculations in Excel go far beyond simply adding subtotals to view the grand total. If you find yourself running the same complex calculations over and over again, let Excel deal with it so you can toss your old calculator:

  1. Open a new file, and label fields for what interests you. This can include rate, quarterly periods, present/future value, and payments.

  2. Select the cell where you want the result of each labeled field to go. Click Insert, then select Function to open the Insert Function window. Then select “Financial” to view all the functions in the financial category.

  3. Double-click the labeled field of choice, which will open a Function Arguments window. Fill in the fields as you labeled them. Click OK and you’re done with the calculator for that label.

  4. Continue with all other labels.
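Outside Excel, this kind of custom calculator is just arithmetic. As an illustration, here is a plain-Python version of the amortized-loan payment that Excel’s =PMT() function computes (the loan figures below are made-up example values):

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortized-loan payment, the same formula behind Excel's =PMT()."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# A $100,000 loan at 6% annual interest over 30 years:
print(round(monthly_payment(100_000, 0.06, 30), 2))  # 599.55
```

Excel’s =PMT(0.06/12, 360, 100000) reports the same figure as a negative number, since it treats the payment as an outgoing cash flow.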

2. Make use of accounting functions.

Excel is fully equipped for loan calculators, financial reports, expense tracking, forecasts, and budget plans. Spare yourself a meeting with the accountant and view metrics like revenue, operating profit, interest, depreciation, net profit, and quarterly trends at a glance. Pivot tables can help you create dynamic summary reports from raw data very easily, all in a drag-and-drop interface:

  1. If you’re doing this on a new spreadsheet, click on cell A1, then click on the “Number” tab at the top of the page. Under “Format Cells,” select the “Accounting” option. Unless you wish to make additional adjustments, select “OK.” You can deselect showing the currency symbol at this point if you wish.

  2. You can apply this format to a range of cells by selecting the range and using the Format Painter tool.

  3. Built-in formulas that can be applied and tweaked to customize include cash flow and asset depreciation. After applying the formulas, continue creating other formulas that branch off into new column headings, such as date, balance, and amount.
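The pivot-table idea above (summarize raw rows by a category, with drag and drop) boils down to group-and-aggregate. A minimal sketch in plain Python, with invented expense rows standing in for spreadsheet data:

```python
from collections import defaultdict

# Raw rows: (category, amount) -- invented sample data.
expenses = [
    ("rent", 1200.0),
    ("travel", 340.5),
    ("rent", 1200.0),
    ("supplies", 89.99),
    ("travel", 120.0),
]

# This is what a pivot table does when you drop 'category' on rows
# and sum 'amount' under values:
totals = defaultdict(float)
for category, amount in expenses:
    totals[category] += amount

for category, total in sorted(totals.items()):
    print(f"{category}: {total:.2f}")
```

The spreadsheet advantage is that the grouping column and the aggregation can be swapped interactively; the underlying computation is no more than this loop.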

3. Transform numbers into charts and graphs.

All it takes is a few clicks to transform rows and columns of numerical data into charts and graphs, which are far more visual and digestible. It’s a major time-saver for data analysis:

  1. Enter your data into the spreadsheet. For example, A1 could say “Date” and B1 could say “Number of Signups.” A2 and B2 downwards would have the data as it corresponds with one another.

  2. When done, select the top left cell, then while pressing “Shift,” click on the bottom right cell. This will highlight all the data.

  3. Click the “Insert” tab up top, select “Chart” and “Recommended Charts.”

  4. Click a chart option, or click on “All Charts” for additional options.
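The same Date / “Number of Signups” layout can be visualized programmatically as well. A deliberately tiny sketch, rendering invented sample rows as a text bar chart rather than an Excel chart object:

```python
# Rows mirroring the A and B columns: (date, number of signups). Sample values.
signups = [("Oct 01", 12), ("Oct 02", 30), ("Oct 03", 21)]

# One bar per row, one '#' per signup -- a crude stand-in for a bar chart.
lines = [f"{date} | {'#' * count} {count}" for date, count in signups]
print("\n".join(lines))
```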

4. Map out daily calendars and schedules.

You already have software for daily calendars and schedules. Sure. But why turn to many individual pieces of software when one can handle it all?

Use Excel to map out a content calendar for your website and blog. Use it to maintain a PTO schedule of all your employees. Color-coordinate for different categories, so you can get a quick grasp of areas that may need more focus. It’ll help you monitor progress more efficiently:

  1. Conduct a search on schedule templates. This varies greatly depending on which version of Excel you’re using.

  2. Preview the schedule templates, and download the most suitable one to open into a new worksheet.

  3. Alter text/colors as needed and desired, and get right into inputting the data!

5. Fetch live data from the internet.

Excel can automatically update figures–stock prices, FX rates, results of sports games, flight data of airports, and any info in a shared database–from a live data source. It sure beats tedious manual entry on a daily basis.

Note that this functionality, called “Get & Transform/Power Query,” isn’t available in the 2007 version, only in 2010 and later:

  1. If you’re using 2010, download and install the Power Query Add-In. This is already built into 2013 and later.

  2. Click “Power Query” (or “Data” > “New Query” > “From Other Sources” > “From Web”).

  3. In the “From Web” box, enter the URL. Provide user credential info if needed from the website itself. Click “OK.”

  4. Power Query will scan the web page, and load the data in the “Navigator Pane” under the “Table View.”

  5. Select the table you want to connect to by clicking it from the list.

  6. Click “Load,” and the web data will be seen on your worksheet.

Exclusive: EU privacy chief expects first round of fines under new law by year-end

BRUSSELS (Reuters) – Regulators are set to exercise their new powers by handing out fines and even temporary bans on companies that breach a new EU privacy law, with the first round of sanctions expected by the end of the year, the bloc’s privacy chief said.

FILE PHOTO: An illuminated Google logo is seen inside an office building in Zurich September 5, 2018. REUTERS/Arnd Wiegmann/File Photo

The European Union General Data Protection Regulation (GDPR), heralded as the biggest shake-up of data privacy laws in more than two decades, came into force on May 25.

The new rules, designed for the digital age, allow consumers to better control their personal data and give regulators the power to impose fines of up to 4 percent of global revenue or 20 million euros ($23 million), whichever is higher, for violations.
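The “whichever is higher” rule works out to a simple maximum of the two prongs. A quick sketch (the revenue figures below are hypothetical):

```python
def gdpr_max_fine(global_revenue_eur):
    """Upper bound on a GDPR fine: 4% of global revenue or
    20 million euros, whichever is higher."""
    return max(0.04 * global_revenue_eur, 20_000_000.0)

# For a giant with 10 billion euros of revenue, the 4% prong dominates:
print(gdpr_max_fine(10_000_000_000))  # 400000000.0
# For a small firm with 5 million euros of revenue, the 20M floor applies:
print(gdpr_max_fine(5_000_000))       # 20000000.0
```

Which prong binds depends only on whether global revenue exceeds 500 million euros, the point at which 4 percent equals 20 million.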

Enforcers have since then been deluged by complaints about violations and queries for clarification, with France and Italy alone reporting a 53 percent jump in complaints from last year, European Data Protection Supervisor Giovanni Buttarelli said.

“I expect first GDPR fines for some cases by the end of the year. Not necessarily fines but also decisions to admonish the controllers, to impose a preliminary ban, a temporary ban or to give them an ultimatum,” Buttarelli told Reuters in an interview.

FILE PHOTO: The Facebook logo is shown at Facebook headquarters in Palo Alto, California, U.S., May 26, 2010. REUTERS/Robert Galbraith/File Photo

Data controllers, which could include social networks, search engines and online retailers, collect and process personal data while a data processor only processes the data on behalf of the controllers.

Fines are levied by national privacy regulators in the various EU member states. While Buttarelli does not personally impose fines, he coordinates the work of privacy agencies across the bloc.

Fines could be imposed on any company that operates in Europe, no matter where it is headquartered.

“The fine is relevant for the company and important for the public opinion, for consumer trust. But from an administrative viewpoint, this is just one element of the global enforcement,” Buttarelli said.

He said the sanctions will be imposed in many EU countries and will hit many companies and public administrations but declined to provide details because investigations were still ongoing.

Complaints filed against Google (GOOGL.O), Facebook (FB.O), Instagram and WhatsApp by Austrian data privacy activist Max Schrems on the same day the GDPR rules were implemented are not expected to be among these cases as they are still at a preliminary stage, he said.

Buttarelli also urged EU countries and lawmakers to bridge their differences on overhauling the e-privacy directive which aims to create a level playing field between telecoms operators and online messaging and email services such as WhatsApp and Microsoft (MSFT.O) subsidiary Skype.

Hailed by privacy activists but criticized by tech companies and some EU countries as being too restrictive, the e-privacy proposal aims to extend tough telecoms privacy rules to the tech giants.

“E-privacy is simply indispensable. It is essential, it is a missing piece in the jigsaw of data protection and privacy. It would be really a dereliction of duty if the EU cannot update soon before the (European Parliament) elections its rules on confidentiality of communication,” Buttarelli said.

Parliament elections are in May 2019.

“I think there is a margin of maneuver for sustainable compromise although there are points which cannot be negotiated. For instance the scope of application of e-privacy to over-the-top, beyond the telcos, the tech giants,” he said.

Over-the-top refers to content delivered via the internet. It usually applies to companies like Google and Skype which offer services similar to telcos but are not telcos.

Consumer lobbying group BEUC said EU countries should stop dragging their feet.

“This law would be a much needed upgrade of current rules to safeguard consumers’ privacy when they go on the internet or use mobile apps as well as protect the confidentiality of their online communication,” BEUC spokesman Johannes Kleis said.

Reporting by Foo Yun Chee; Editing by Adrian Croft

Google unveils new Pixel phone, adds tablet in Apple challenge

SAN FRANCISCO (Reuters) – Alphabet Inc’s Google unveiled on Tuesday the third edition of its Pixel smartphone, a Google Home smart speaker with a display and its first tablet computer, as it makes a come-from-behind push into hardware.

The Google Pixel 3 third generation smartphones are seen on display after a news conference in Manhattan, New York, U.S., October 9, 2018. REUTERS/Shannon Stapleton

The company’s Android software has gone from being an also-ran to the brains of most of the world’s smartphones, and Google topped Amazon.com Inc in smart speaker sales in recent quarters.

Pixel phones, though, have been a tougher sell, garnering less than 1 percent of the global market by shipments in Google’s first two years of trying, according to research firm Strategy Analytics, and launching with glitches.

The Pixel 3, priced at $799, and larger sibling Pixel 3 XL, priced at $899, mark Google’s latest entries into a phone lineup it hopes will someday be as popular as Apple Inc’s iPhone.

The new Pixel Slate tablet runs Google’s beefier Chrome OS laptop operating system rather than Android and is priced at $599, putting it in competition with Apple’s iPad Pro tablet series.

Shares of Alphabet barely moved on the release. Financial analysts said it is difficult to evaluate Google’s hardware business because it is overshadowed by profits from search ads.

Google branched into hardware three years ago so that, like Apple, it could have full control of the performance of its applications and the revenue they generate. Other phone makers sometimes crowd out Google’s apps with their own or take a share of ad revenue.


Expanding geographic distribution is likely to boost Pixel’s fortunes. The Pixel 3 will launch in 10 countries, up from six for the Pixel 2 a year ago. New additions include France, Ireland, Japan and Taiwan.

Also helpful could be a new artificial intelligence tool sure to generate buzz among consumers. The software, launching in the United States only, answers phone calls, requests information about the nature of the call and shares it as text with the recipient.

“We’ve built the first phone that can answer the phone,” Rick Osterloh, Google’s senior vice president for hardware, told media on Tuesday.

Google shipped 2.53 million Pixel 2 and 2 XL devices through the nine months ended June 30, Strategy Analytics said. The first Pixel devices hit 2.4 million shipments in the nine months ended June 30, 2017, the firm said.

Limited adoption has reflected Google’s hesitancy to go as wide and big in distributing and marketing the Pixel as Apple, which launched its last two iPhone line-ups in about 50 countries.

Going from a small experiment to a polished product that works in various languages and is backed by large sales, support and technical teams has been part of Google’s challenge.

Last year’s Pixel 2 arrived with bugs that prompted user complaints about unwanted noises during calls, a crashing camera app and an unexpected screen tint. Google doubled warranties to two years in response.

Google Assistant, the signature virtual helper feature on the Pixel, was available in six languages a year ago and now supports 16.


In turn, Google hosted 10 unveiling events across the world on Tuesday, including in New York, London, Paris, Tokyo and Singapore, spokesman Kay Oberbeck said.

Still, the Pixel 3 could see limited uptake in the United States as Google again signed an exclusive distribution deal with wireless carrier Verizon Communications Inc that means the device will get little marketing from other carriers.

Google said it would augment distribution by opening on Oct. 18 two temporary stores in popular neighborhoods of Chicago and New York and putting up displays at U.S. retailers B8ta and Goop.

Google’s new smart speaker, which has a display to show visual responses to voice commands, mostly matches offerings from Amazon.com Inc and Facebook Inc.

But unlike its competitors, Google said its Home Hub, priced at $149, does not have a video conferencing camera.

The nod to privacy concerns comes as Google and other big U.S. tech companies try to bounce back from recent data breach scandals.

Amazon shipped 21.5 million smart speakers, including those with displays, in the year ended June 30, compared with 18.3 million for Google, according to research firm Canalys.

Google said in a blog post on Tuesday that it recently delivered some Google Home speakers within 10 minutes of ordering using drones from Alphabet’s Project Wing.

Shares of speaker maker Sonos Inc were down 5.6 percent on Tuesday.

Reporting by Paresh Dave and Arjun Panchadar; Editing by Leslie Adler, Peter Henderson and Meredith Mazzilli

U.K. High Court Blocks Class Action Against Google Over User Privacy

A class action lawsuit brought against Google with the goal of collecting £3 billion ($3.5 billion) in compensation was blocked in the U.K. high court on Monday.

Google was accused of bypassing default iPhone privacy settings between August 2011 and February 2012, allowing the company to collect data from people in the U.K. who used the Safari browser, Wired reported. The lawsuit, which represents more than 4 million iPhone users, was brought to the High Court by a group called Google You Owe Us, led by former Which? director Richard Lloyd.

High Court judge Mr. Justice Warby announced the decision to block the lawsuit in London on Monday. In his ruling, he said “it is arguable that Google’s alleged role in the collection, collation, and use of data obtained via the ‘Safari workaround’ was wrongful, and a breach of duty,” but added that Google did not cause damage to users, according to the Guardian.

The information collected by Google was allegedly used for its DoubleClick service, a tool that allows advertisers to use personal data to target people based on race, sexuality, political leanings, and social class, the Financial Times reported. In a hearing in May, the court heard from Lloyd’s lawyers that the information was later used to create groups like “football lovers.” They added that Google was also able to collect information about a user’s financial situation, shopping habits, and geographical location.

Lloyd called the High Court’s decision “extremely disappointing” and added that it leaves people without any avenues for seeking justice. “Closing this route to redress puts consumers in the UK at risk and sends a signal to the world’s largest tech companies that they can continue to get away with treating our information irresponsibly,” Lloyd said in a statement reported by the Guardian.

Google, on the other hand, dismissed the claims. “The privacy and security of our users is extremely important to us. This claim is without merit, and we’re pleased the court has dismissed it.”

Google+ Will Shut Down After Security Breach Exposed User Data to Outside Developers, Report Says

A Google+ security breach gave outside developers access to the private data of hundreds of thousands of the social network’s users between 2015 and March 2018, according to a Wall Street Journal report. Google neglected to report the breach to the public, allegedly out of fear that the company would face regulations and damage to its reputation, according to sources and documents obtained by the Wall Street Journal.

In a memo cited by the Wall Street Journal, Google’s legal and policy staff warned against disclosing the breach, fearing it would draw comparisons to Facebook’s mishandling of user data, in which more than 50 million Facebook users had their personal information leaked to the data firm Cambridge Analytica.

The information exposed in the Google+ data breach included full names, email addresses, birth dates, gender, profile photos, places lived, occupation, and relationship status.

Google has recently been at the center of a number of privacy breaches. The company was the target of a massive class action lawsuit in the U.K. after 4 million users had their personal data collected and allegedly used for targeted advertising. The lawsuit was blocked in the High Court on Monday.

The Google+ data breach was discovered in March of this year during an audit of the company’s APIs, conducted by a privacy task force codenamed Project Strobe. A bug in the API could have allowed outside developers to access the data of 496,951 users who had only opted to share their private profile data with friends.

Google is expected to announce the breach on Monday, as well as its plans to shut down Google+, according to the Wall Street Journal.

How To Protect Your Portfolio In A Bear Market

Economic uncertainty in emerging markets and steeply rising interest rates in the U.S. created plenty of concerns among global investors in the past week. Nobody knows for certain how these factors will affect stock markets in the coming days, but the fact remains that the stock market is inherently volatile and unpredictable.

A bear market is coming sooner or later; that’s just the way financial markets work, and investors need to be prepared for all kinds of scenarios.

Even the smartest professionals with massive amounts of intellectual and financial resources fail miserably in their attempts to forecast bull and bear markets.

As opposed to making market predictions, relying on objectively quantified variables with a solid track record of performance is a far sounder approach to protecting your capital through the ups and downs in the markets.

The following paragraphs will introduce 3 different quantitative systems based on trend following, earnings expectations, and relative strength. None of these systems is perfect or infallible, but the evidence shows that they can be remarkably effective at providing market protection through all kinds of environments.

Importantly, these systems are entirely rules-based, and they don’t involve any kind of market forecast or prediction whatsoever. The main idea is that you can reduce your downside risk in bear markets by relying on cold, hard data and observable indicators.

The Trend Is Your Friend

One of the most popular sayings in the market is “the trend is your friend”. Even if that is a cliché, that doesn’t make it any less true. There is plenty of statistical evidence proving that investors can optimize the risk vs. return equation in their portfolio and avoid big drawdowns by following the main trends in asset prices.

The following system is remarkably simple, yet effective. The market is considered to be in an uptrend if the slope of the 200-day moving average of the SPDR S&P 500 (SPY) has been positive over the past 10 days. Conversely, if that slope has been negative over the past 10 days, the market is considered to be in a downtrend.

The system buys the SPDR S&P 500 only when it’s in an uptrend, and it remains in cash when the ETF is in a downtrend. Buy or sell decisions are made only every 4 weeks, so the system doesn’t require a lot of work, and trading expenses should be negligible.
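The trend rule described above can be sketched in a few lines of Python. This follows the article's description (200-day moving average, 10-day slope check); the function name, default parameters, and the synthetic price series are my own assumptions, and real prices would come from a market data feed.

```python
import numpy as np

def trend_signal(prices, ma_window=200, slope_lookback=10):
    """Return True (uptrend) if the 200-day moving average has risen
    over the past 10 days, else False (downtrend)."""
    prices = np.asarray(prices, dtype=float)
    # Trailing moving average for each day with a full window of history.
    kernel = np.ones(ma_window) / ma_window
    ma = np.convolve(prices, kernel, mode="valid")
    # Positive slope if today's MA is above the MA `slope_lookback` days ago.
    return ma[-1] > ma[-1 - slope_lookback]

# A steadily rising synthetic price series should register as an uptrend.
rising = np.linspace(100, 130, 260)
print(trend_signal(rising))  # True
```

Re-evaluating this signal every 4 weeks, and holding SPY only while it returns True, reproduces the stated trading schedule.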


Data from S&P Global via Portfolio123

Since January of 1999, this system produced an annual return of 8.55% versus an annual return of 6.35% for a buy and hold strategy in the market-tracking ETF over the same period.

In other words, a $100,000 position invested in the trend following system in January of 1999 would have a current market value of $505,100 and the same amount of capital allocated to a buy and hold position in the SPDR S&P 500 would be worth $337,500.

Even more important, the maximum drawdown for the trend following system was 20.57% during the backtest period versus a much larger drawdown of 55.42% for the buy and hold strategy.

The backtest indicates that this remarkably simple trend following system produces both higher returns and much smaller downside risk than a buy and hold strategy in the SPDR S&P 500 ETF.

Market Timing Based On Earnings Expectations

There is an almost infinite number of fundamental variables to consider when making investment decisions, but earnings are clearly one of the most important return drivers for stocks. At the end of the day, a stock is simply a share in the ownership of a business, so earnings have a huge impact on stock prices.

This system basically buys and sells the SPDR S&P 500 Trust ETF based on earnings estimates for companies in the S&P 500 index. Since earnings estimates can be quite volatile, the system uses moving averages in earnings expectations to smooth the data and evaluate the main trends in those estimates.

Specifically: When the 5-day moving average of earnings estimates is above the 20-day moving average, meaning that earnings estimates are on the rise, the system is fully invested in the SPDR S&P 500 Trust ETF. On the other hand, when the 5-day moving average is below the 20-day moving average in earnings estimates, the system is completely allocated to cash.
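The crossover rule above can be sketched as a short Python function. The 5-day/20-day comparison follows the article's description; the function name, the return labels, and the sample estimate series are invented for illustration, and an actual estimate series would come from a data vendor.

```python
def crossover_signal(estimates, fast=5, slow=20):
    """Fully invested when the fast (5-day) moving average of earnings
    estimates is above the slow (20-day) one, otherwise in cash."""
    def sma(xs, n):
        return sum(xs[-n:]) / n  # simple moving average of the last n values
    return "invested" if sma(estimates, fast) > sma(estimates, slow) else "cash"

# Steadily rising estimates put the fast MA above the slow MA.
rising_estimates = [100 + 0.5 * i for i in range(30)]
print(crossover_signal(rising_estimates))  # invested
```

Smoothing both sides with moving averages is what keeps the day-to-day noise in analyst estimates from whipsawing the position.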

Data from S&P Global via Portfolio123

Since January of 1999, this system gained nearly 429.75%, while the buy and hold strategy in the ETF gained a much smaller 237.52%. Even better, the maximum drawdown for the system was around 25.44% during the backtest period, while a buy and hold strategy in the SPDR S&P 500 Trust ETF had a maximum drawdown of 55.42%.

Earnings expectations have a big impact on stock prices, and the data indicates that investors have a lot to win in terms of increasing returns and reducing drawdowns by incorporating earnings expectations into their toolbox for investing decisions.

Asset Class Rotation Based On Relative Strength

Trend following is about evaluating the main price trends in an asset, so you are looking at the current price versus previous price levels for that particular asset.

On the other hand, relative strength is about comparing different asset classes. Even if both stocks and bonds are in uptrends, we can compare the two asset classes in terms of their risk-adjusted returns to evaluate which one has superior relative strength.

Combining trend following and relative strength means investing only in assets that are rising in price over the long term, and also picking only the strongest names among the ones that are rising in price.

The following system rotates between 9 ETFs that represent some key asset classes.

  • SPDR S&P 500 for big stocks in the U.S.
  • iShares Russell 2000 ETF (IWM) for small U.S. stocks.
  • iShares MSCI EAFE ETF (EFA) for international stocks in developed markets.
  • iShares MSCI Emerging Markets ETF (EEM) for international stocks in emerging markets.
  • Invesco DB Commodity Index Tracking ETF (DBC) for a basket of commodities.
  • SPDR Gold Trust ETF (GLD) for gold.
  • Vanguard Real Estate ETF (VNQ) for REITs.
  • iShares 20+ Year Treasury Bond ETF (TLT) for long-term Treasury bonds.
  • iShares 1-3 Year Treasury Bond ETF (SHY) for short-term Treasury bonds.

In order to be eligible, an ETF has to be in an uptrend, meaning that the current market price is above the 10-month moving average. If no ETF is in an uptrend, the system goes for the safest asset in the group, which is the iShares 1-3 Year Treasury Bond ETF.

Among the ETFs that are in an uptrend, the system buys the top 3 with the highest relative strength. Relative strength is measured by a ranking system that considers volatility-adjusted returns over 3 and 6 months.
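The two-step rotation rule (filter by uptrend, then rank by relative strength) can be sketched as follows. This is a simplified illustration: the input structures, tickers, and sample numbers are my own assumptions, and the actual relative-strength score (volatility-adjusted 3- and 6-month returns) is left as an opaque input.

```python
def pick_assets(prices_vs_ma, rel_strength, top_n=3, safe_asset="SHY"):
    """Rotation rule: keep only ETFs trading above their 10-month moving
    average, then buy the top 3 by relative strength score.
    `prices_vs_ma` maps ticker -> (current price, 10-month MA);
    `rel_strength` maps ticker -> rank score (higher is stronger)."""
    uptrends = [t for t, (px, ma) in prices_vs_ma.items() if px > ma]
    if not uptrends:
        return [safe_asset]  # nothing in an uptrend: park in short-term Treasuries
    return sorted(uptrends, key=lambda t: rel_strength[t], reverse=True)[:top_n]

# Hypothetical snapshot: GLD is below its 10-month MA, so it is filtered out.
prices = {"SPY": (280, 270), "GLD": (115, 120), "TLT": (118, 115), "EEM": (42, 40)}
strength = {"SPY": 0.9, "GLD": 0.2, "TLT": 0.5, "EEM": 0.7}
print(pick_assets(prices, strength))  # ['SPY', 'EEM', 'TLT']
```

Separating the uptrend filter from the ranking step is the design point here: the filter caps downside risk, while the ranking allocates among whatever survives it.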

Since 2007 the system gained a cumulative 325.2%, more than double the 136.5% generated by a buy and hold strategy in SPDR S&P 500. The maximum drawdown for the system is around 14% versus more than 55% for a buy and hold position in the ETF that tracks the S&P 500 index.

Source: ETFreplay.

These three systems show how different quantitative methods can provide downside protection in a bear market without making any kind of market prediction or speculation whatsoever.

Even if you don’t replicate these kinds of systems, the information that these systems provide can be enormously valuable at analyzing market conditions and adjusting your portfolio risk level accordingly. At the end of the day, information is power, and the information provided by these kinds of quantitative systems can make a big impact on your capital over the long term.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

15 Columbus Day Sales on Tech We Love: Instant Pot, Apple, Vizio, Amazon Echo

Whether you observe Columbus Day or Indigenous People’s Day, this is a weekend with a few extra tech and gaming sales. We’ve highlighted some discounts for tech we really like. We’ve tested many of these items, and the rest look like they’re well worth your hard-earned money.

New this Week

  • New Fire TV Stick 4K for $50 (Was $70). We haven’t yet tested the latest Fire TV, but it doesn’t appear to pack a lot of surprises. It should still work well for Amazon content as well as other streaming apps. This model has a voice remote for Alexa and is significantly cheaper (and trimmer) than last year’s 4K Fire TV. It will likely show up in our Best TV Streaming Devices sometime soon.

  • Super Mario Party for $60. The latest Mario Party title is getting good reviews and has tons of minigames that are as unique and fun as the Switch itself. It’s a terrific way to kill time with some friends.

  • 6 Months of Kindle Unlimited for $30 (Was $60). Kindle Unlimited gives you access to more than a million books on the Kindle app or a Kindle ebook reader at no extra charge, with some magazines thrown in, too. This discount lasts until Oct. 20.

  • Marvel Thanos Gauntlet Mood Lamp for $40. It won’t give you the ability to erase half of the sentient beings in the universe, but this Infinity Gauntlet lamp is colorful and a lot of fun.

Apple Discounts

Best Buy has a big Apple Shopping Event going on. Not everything is actually on sale, but here are three picks that might be of interest.

Tech Deals

Columbus Day Sales pages

There are plenty more TVs and PC deals available through Columbus Day, and some appliances and tools on discount. Below are some major retailer sales pages, if you’re in the mood to peruse.

When you buy something using the retail links in our stories, we may earn a small affiliate commission. Read more about how this works.

Tesla's Achilles' Heel?


Tesla’s (NASDAQ:TSLA) drive to sustainable profitability has passed through the “production hell” phase and is now in the “delivery hell” phase. Incumbent automakers have a much simpler delivery problem because they need to ship produced cars to a variety of local dealerships rather than a multitude of home addresses. This article will examine dealer networks and sales per dealer, and compare the delivery logistics of selling to a few hundred to a few thousand dealers versus the direct sales model that Tesla is using. We will also examine additional implications for sales to non-early adopters, as well as the future potential for “service hell” (once there are several hundred thousand cars in the hands of consumers).

Dealer Networks

The US is covered by close to 40,000 auto franchise establishments (see Figure 1).

Figure 1: Number of Auto Franchise Establishments in the US

(Source: Franchise industry: automotive establishments U.S. 2018 | Statista)

These include dealers, repair shops, and parts shops. Many of these are multi-make shops, in that they sell, repair or service more than one make of car. Some services, like tire repair/replacement, body work, glass repair, battery replacement, etc., can be done without regard to the make of the car, while others may require parts to be ordered from manufacturers or auto parts suppliers. If we focus on new car dealerships only, the number is roughly 17,000 (Source: Fed Paper).

Figure 2 shows the density of auto dealerships around large and mid-sized cities in the US. In addition, many smaller cities and towns also have auto dealers.

Figure 2: Auto Dealership Density in the US

(Source: Map Link)

This geographical dispersion of dealerships serves two important functions for consumers:

1. Sales – Most consumers will not buy a car without a test drive and some amount of comparison shopping (possibly test-driving multiple makes and models). Auto dealerships are where you go for a test drive. In particular, if you live in a smaller city or town, the presence of a multi-make dealer would allow you to test drive a Volvo, Audi, VW or Subaru all by just walking a few hundred feet from one showroom to another within the same larger dealer facility.

2. Service – The presence of a diffuse dealer network also means that every city and many towns have a location where customers can go for in-warranty repairs, or after the warranty period if a repair requiring consultation with the manufacturer or parts supplier is needed.

On the auto producer side, this dealer network has advantages as well.

1. Inventory Management – Each dealer manages its local inventory. The car is sold (and booked) when it is delivered to the dealer. The costs of land, buildings, local advertising and inventory depreciation are passed on to the dealer network in exchange for the differential between the dealer price and the retail price.

2. Delivery Logistics – Auto producers need to get their cars to a relatively smaller number of locations which have specialized on the local/consumer interaction side, including registration, sales tax payment, contract signature, signing of loan documents and insurance. This is considerably simpler than delivering cars to customers’ doors, and much cheaper.

3. Sales Exposure – As mentioned on the customer side, the presence of multi-make dealers allows a producer’s cars to be exposed to a much wider set of towns and smaller cities than having exclusive showrooms confined to a few large cities (due to economies of scale on the dealer side) would.

4. Service – Once cars are sold, producers are only on the financial hook for warranty repairs. Other than that, they are completely out of the service business, which keeps their organizations much simpler than having direct sales and service staff would. As with sales, service is also a multi-make function: if service staff have fewer Audis to repair one day, they can be moved over to Volvos. An experienced technician can easily work on 5-10 different makes and any model from each.

Capital Requirements

Figure 3 shows that on average, dealers keep about 70 days of sales inventory.

Figure 3: Inventory Kept by Auto Dealers in Days Supply Units

Figure 4: Vehicles Sold Annually per Dealership

(Source: Fed Auto Inventory Paper)

Figure 4 shows that the average dealer sells about 1,000 units per year; combining Figures 3 and 4, we can back out that dealers keep on average 192 cars in inventory, roughly $4.8 million worth. Adding in the average cost of land, buildings, and working capital requirements for sales and service employees, we get a conservative estimate of $7 million per dealer. Therefore, across 17,000 dealers, the US dealer network is saving the auto industry the use of at least $119 billion in working (and other) capital in exchange for the dealer/retail price differential (essentially in perpetuity, subject to renewals). Indeed, the concept of a dealer network was invented by none other than Henry Ford in 1903, as he was trying to ramp up car production and running into capital constraints (Source: Henry Ford Org).
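The back-of-envelope arithmetic above can be checked in a few lines. The $25,000 average wholesale cost per car is an assumption of mine, implied by the article's $4.8 million figure for 192 cars; the other inputs come from the article's Figures 3 and 4.

```python
# Back-of-envelope check of the dealer-capital estimate.
units_per_year = 1000                  # Figure 4: average dealer sales per year
days_supply = 70                       # Figure 3: average inventory, days of supply
cars_in_stock = units_per_year * days_supply / 365
inventory_value = cars_in_stock * 25_000       # assumed $25k wholesale cost per car
capital_per_dealer = 7_000_000                 # inventory + land, buildings, working capital
network_capital = 17_000 * capital_per_dealer  # across all new-car dealerships

print(round(cars_in_stock))                    # 192 cars
print(round(inventory_value / 1e6, 1))         # 4.8 ($M)
print(network_capital / 1e9)                   # 119.0 ($B)
```

The numbers reproduce the article's ~192 cars, ~$4.8M of inventory per dealer, and ~$119B across the network.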

The margin that the average dealer earns on new car sales is 1-2% (pre-tax profit margin), whereas service margins and used car sales are reported to be much better; the average dealer gets 44% of its profits from service and 26% from used car margins, with the balance coming from sales of financial products associated with new vehicle sales, such as extended warranties, lending products, etc. (Source: Edmunds).

So, in summary, almost all car producers have outsourced the sales and service function of their vehicles in exchange for the loss of the retail/dealer 1-2% margin. They incur none of the service costs nor revenue, and do not deal with the sale of affiliated financial products such as extended warranties or loans.

How About Tesla?

Figure 5 shows the locations of Tesla showrooms and service centers (red dots) in the southwestern US (for the full map, which does not show well in this article, please go to the source map using the link). Their geographic reach is quite limited at the current time, and outside of CA and FL, many states, such as Nevada and Arizona, have a single sales center (the emptier states have none). Even in a large-population state like NY, there is not a service center outside the NYC metro area.

Figure 5: Tesla Sales & Service Centers

(Source: Find Us | Tesla)

While the ramp-up of Tesla sales to early adopters did not present a challenge (especially given the percentage of sales in CA) using this distribution model, the “delivery hell” that the company is now stuck in may be a function of not having leveraged the existing capital of the US dealership network in pursuit of a different business model.

While company data does not allow us to definitively back this out, it is highly probable, in my opinion, that the distribution, sales and service model Tesla has chosen constrains the size at which SG&A margin becomes sufficiently positive for the company as a whole to remain cash flow-positive and profitable. We reason as follows: instead of renting the existing infrastructure of the US dealership network (which could probably accommodate Tesla sales and service with small marginal capital expenditures, funded out of existing positive dealer cash flow), Tesla is trying to build a more efficient version of the same infrastructure (a capital-intensive process) on top of the extensive capital required to build a new auto manufacturing operation. Other car companies rely on their dealers to obtain loans from local banks to carry inventory and build the retail side of the business. Tesla is trying to do both.

As we can see from comparing the map in Figure 5 to that in Figure 2, it is perhaps 25% of the way to having a sufficiently dense network of sales and service centers in order to become more than a small niche player selling to early adopters in top metro areas. Given the amount of capital (at least $120 billion) embedded in this network, assuming a 10% market share, this means Tesla would need to spend $12 billion in total on the sales/service infrastructure (not accounting for any hypothetical long-term differences in service costs of EVs versus legacy vehicles, for none of which data exists in reliable numbers) versus having let the existing dealer network finance these parts of the retail side of the business.

Investment Implications

Tesla has made it through production hell and reached a level of production approaching 5,000 cars weekly (Source: Company news release). Fundamental questions that remain include Q3 margins and whether Tesla can begin to produce the lower-cost ($35k) Model 3s at positive margins before demand for the higher-end ($46k-$75k) Model 3s has been fully met. Additionally, even delivering all the higher-priced units requires the company to manage through delivery hell (a shortage of car carriers is not the issue; not having infrastructure is). Furthermore, upcoming debt maturities require those high-end car margins to be sufficiently profitable (or an equity raise will be required). As Figures 7 and 8 show, neither annual nor quarterly margins (whether gross, EBITDA, operating or pretax) show any obvious pattern thus far as a function of the number of units produced.

Figure 7: Annual Margin Measures

Figure 8: Quarterly Margin Measures

(Source: Bloomberg)

I believe the dealer and service center network question has a direct bearing on how likely Tesla is to reach $35k-car profitability. We do not see a sufficient reduction in costs coming from materials (steel and aluminum), labor (minimal) or improvements in batteries (battery costs will eventually fall, but that requires technological breakthroughs that take years, not months); in fact, the likely operational leverage would come from not needing to keep increasing spending on replicating dealer infrastructure via direct selling sites and service centers (whether mobile or stationary). This is a problem that a car producer leveraging the capital already invested in the extensive national dealer network likely would not have.

It appears that Tesla has gotten itself into the position of juggling multiple balls while standing on a paddle board; now, hammerhead sharks are beginning to circle the paddle board (competition). The question that needs to be asked: Is selling direct to consumers an innovation to a long-tested process that has been subjected to 100 years of evolution on the battlefield but somehow been missed by all the other players that have struggled with profitability in a capital-intensive, cyclical industry? Or, is it a strategic blunder of the first order that will be seen to be Tesla’s Achilles’ heel when the history of the company is written? Tesla investors would be well advised to ponder this question.

Disclosure: I am/we are short TSLA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: We have both long and short positions in both stock and options which change regularly but are net short on a long-term basis.

Zunum Aero’s Hybrid Plane Uses a Helicopter Engine to Cut Fuel Use in Half

If you’ve flown a drone, you know that battery life is a problem. Be extra careful when flying over water or your kid’s birthday party, because you’ve got something like 20 minutes of flight time before the thing comes down. And you needn’t have learned that lesson the hard way to get nervous about the idea of battery-powered aircraft with people inside.

Yet going electric could make commercial aviation—a significant source of humanity’s greenhouse gas emissions—greener, as well as cheaper and quieter. It could open up routes to and from regional airports, a clean alternative to high-speed inter-city trains. Those are the flights Zunum Aero hopes to make happen. The Kirkland, Washington-based startup is developing small electric planes that carry 10 to 50 passengers and could fly 700 miles between charging stops. Its trick is powering the plane’s motors with electricity that comes from a jet-fuel-burning generator as well as onboard batteries, like a Chevy Volt that’s taken to the skies.

Today, Zunum is announcing that it has found the engine it needs to make that vision take off. The gas turbine is a modified version of the Ardiden 3Z engine made by Safran Helicopter Engines, here coupled to a generator that will deliver 500 kilowatts of electric power—enough for a couple of powerful motors. It’s a crucial step, since today’s batteries are far too big and heavy to make long-distance commercial flights even remotely possible.



Even fulfilling a basic FAA safety requirement—that you be able to fly for 45 minutes longer than it takes to reach your destination—would be a problem without burning some sort of fuel. “That would need a prohibitive amount of battery right now,” says Zunum founder and CTO Matt Knapp. “Not to mention actually going somewhere.”

Zunum’s first aircraft, the ZA10, will be a sleek white machine with slender wings, two ducted fans mounted at the back, and room for up to a dozen passengers. But it’s starting with a more mundane-looking flying test bed, a modified Rockwell Turbo Commander 840, a small plane with two three-blade propellers and usually eight seats. Zunum will start by replacing the 840’s left engine with its own electric motor, sticking a bunch of batteries in the fuselage, and testing at altitude next summer. By the end of 2019, Knapp expects to install the generator and test the hybrid system. Last, it will replace the propellers with ducted fans (a shrouded propeller that can develop more thrust), to test the entire powertrain. If all that goes well, the team will put all the elements into that new plane, of its own design.

Knapp says that with the hybrid system, Zunum’s plane will need half the fuel that a comparable conventional plane burns. Unlike the plug-in hybrid Volt, where the engine cuts in when the batteries run out, Zunum’s aircraft will flit between the two power sources depending on the flight profile. The generator spins up for power-hungry takeoff, or maybe if the pilot’s fighting headwinds. For cruising, though, the batteries can do much of the work, before bringing the aircraft back to earth for a quiet, electric landing.
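That flight-profile dispatch can be sketched as a simple rule: generator plus batteries for takeoff, batteries alone for cruise and landing, with the generator kicking back in under heavy headwinds. This is an illustrative toy only; Zunum has not published its control logic, and the phase names and headwind threshold below are assumptions (only the 500 kW generator figure comes from the article).

```python
# Illustrative sketch of a hybrid power-split rule. Not Zunum's actual
# control logic; phase names and the headwind threshold are assumed.

GENERATOR_KW = 500  # electric output of the turbine-driven generator (from the article)

def power_sources(phase, headwind_knots=0):
    """Return (use_generator, use_battery) for a given flight phase.

    Power-hungry takeoff draws on everything; cruise leans on the
    batteries unless headwinds are strong; landing is battery-only
    for a quiet, electric descent.
    """
    if phase == "takeoff":
        return (True, True)             # full power for climb-out
    if phase == "cruise":
        return (headwind_knots > 30, True)  # generator only if fighting headwinds
    if phase == "landing":
        return (False, True)            # quiet, electric landing
    raise ValueError(f"unknown phase: {phase}")
```

The point of the design, as the article describes it, is that the generator sizes for average power demand while the batteries absorb the peaks.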

Zunum, which has financial backing from Boeing’s HorizonX venture arm, isn’t alone in trying to fill the skies with e-planes. Airbus is working with jet engine maker Rolls-Royce and Siemens on a hybrid-electric flight demonstrator called the E Fan X. Siemens showed it could make the tech work way back in 2011. Israel’s Eviation showed off its “Alice Commuter” at the Paris Air Show last year, a fully electric, Tesla-style plane running off a 980-kWh battery pack—enough for 10 Teslas. NASA’s X-57 is an all-electric affair, with 12 small motors and propellers lining the wings. NASA always wants the lessons it learns in the X-plane program to trickle down into commercial aviation. But with the taxiway already full of companies lining up to launch on electrons, that may not take too long at all.

More Great WIRED Stories

Tech giants allied against proposed Australia law seeking encrypted data

SYDNEY (Reuters) – Four global tech giants – Facebook, Apple, Alphabet and Amazon – will oppose an Australian law that would require them to provide access to private encrypted data linked to suspected illegal activities, an industry lobby group said on Wednesday.

FILE PHOTO: Silhouettes of laptop users are seen next to a screen projection of Facebook logo in this picture illustration taken March 28, 2018. REUTERS/Dado Ruvic/Illustration/File Photo

Australia in August proposed fines of up to A$10 million ($7.2 million) for institutions and prison terms for individuals who do not comply with a court request to give authorities access to private data.

The government has said the proposed law is needed amid a heightened risk of terror attacks.

With the bill seen as a test case as other nations explore similar laws, Facebook Inc, Alphabet Inc, Apple Inc and Amazon will jointly lobby lawmakers to amend it ahead of a parliamentary vote expected in a few weeks.

Customers walk past an Apple logo inside of an Apple store at Grand Central Station in New York, U.S., August 1, 2018. REUTERS/Lucas Jackson

“Any kind of attempt by interception agencies, as they are called in the bill, to create tools to weaken encryption is a huge risk to our digital security,” said Lizzie O’Shea, a spokeswoman for the Alliance for a Safe and Secure Internet.

She said the four companies had confirmed their participation in the lobbying effort.

Representatives for the four firms did not immediately respond to requests for comment.

A spokeswoman for Australia’s home affairs minister, who is overseeing the legislation, did not immediately respond to a request for comment.

If the bill becomes law, Australia would be one of the first nations to impose broad access requirements on technology companies, though others are poised to follow.

FILE PHOTO: The logo of Amazon is pictured inside the company’s office in Bengaluru, India, April 20, 2018. REUTERS/Abhishek N. Chinnappa/File Photo

The so-called Five Eyes nations, which share intelligence, said last month they would demand access to encrypted emails, text messages and voice communications through legislation.

The Five Eyes intelligence network, made up of the United States, Canada, Britain, Australia and New Zealand, has repeatedly warned that national security is at risk because authorities are unable to monitor the communications of suspects.

Technology companies have strongly opposed efforts to create what they see as a back door to users’ data, a stand-off that was propelled into the public arena by Apple’s refusal to unlock an iPhone used by an attacker in a 2015 shooting in California.

Frustrated by the deadlock, many countries are moving ahead with legislation, with New Zealand the latest to tighten oversight over access to online communication.

New Zealand said on Tuesday customs officers now have the authority to compel visitors to hand over passwords for their electronic devices. Tourists who refuse could face fines of NZ$5,000 ($3,292.00).

($1 = 1.3933 Australian dollars)

($1 = 1.5188 New Zealand dollars)

Reporting by Colin Packham; editing by Darren Schuettler

Mazda aims for all of its vehicles to be electric hybrid, EVs by 2030

TOKYO (Reuters) – Mazda Motor Corp (7261.T) said on Tuesday that all of the vehicles it produces by 2030 will incorporate electrification, while 5 percent of its cars will be all-battery electric vehicles (EVs).

FILE PHOTO: The logo of Mazda Motor Corp. is displayed at the company’s news conference venue in Tokyo, Japan May 11, 2018. REUTERS/Kim Kyung-Hoon

The Japanese automaker joins a growing number of global automakers who are planning to reduce emissions by producing more gasoline-hybrid vehicles, plug-in hybrids and battery EVs.

“By 2030, Mazda expects that internal combustion engines combined with some form of electrification will account for 95 percent of the vehicles it produces and battery electric vehicles will account for 5 percent,” the automaker said in a statement.

Mazda has said that it plans to market an all-battery EV in 2020. On Tuesday it said it would develop two battery EVs, one which will be powered solely by battery and another which will pair a battery with a range extender powered by the automaker’s rotary engine.

Reporting by Naomi Tajitsu; Editing by Muralikumar Anantharaman

Google CFO Ruth Porat: When It Comes to Data Privacy, ‘We Need to Constantly Raise the Bar on Ourselves’

Before Ruth Porat became the chief financial officer of both Google and its parent company Alphabet, she worked at Morgan Stanley. There, she—like many others in the financial industry—slogged through what became the 2008-2009 financial crisis, leading a team that advised the U.S. Department of Treasury on Fannie Mae and Freddie Mac and the New York Federal Reserve Bank on AIG.

The experience left an impression on her, she said Monday at Fortune’s Most Powerful Women Summit in Laguna Niguel, Calif. “One of the really important lessons for all industries that I took away from the work during the financial crisis—it applies to all industries in good times and bad—is: What are the unintended consequences of everything we’re doing, and how do we each stay ahead of those?” she said.

Porat, in conversation with Fortune’s Pattie Sellers before hundreds of top female executives, said those questions stuck with her as she joined one of the world’s largest and most valuable tech companies. Like peer media companies Facebook and Twitter, Google has come under fire recently for its data privacy policies and a business model based on the collection and dissemination of user data. (Google generates the lion’s share of its revenues from advertising dollars.)

It’s an unfair criticism, Porat seemed to suggest.

“Privacy’s been very important for Google since inception,” she said. The company was the first to say that users could take their personal data with them, for example. There are substantial privacy controls available to users. “‘Respect the user’ is a key mantra internally,” she added.

But that’s not enough. After the social media extremism on display over the last year, “The key lesson is, we need to constantly raise the bar on ourselves,” Porat said of Google and its peers. Some of the racism, sexism, and hate speech on display on today’s largest social networks can be difficult for the average person to discern; partnerships with, say, NGOs are needed to understand what a dog whistle is, for example.

“You need to identify your source of vulnerability and invest in it early,” Porat said. “For banking [during the ’08 financial crisis], it was liquidity.”

Or, put another way: Investing in the right foundation of data analytics systems allows you to “drive without mud on the windshield”—that is, faster and unencumbered, with a clear view.

Is there mud on the windshield at Google or Alphabet? Was there any when Porat joined the company in 2015? The CFO took a long pause after Sellers asked her the question, eventually finding the words. “You can always provide business leaders with greater data and clarity so they can make better business decisions,” she said. “They can stack-rank what’s most important to be focused on.” And think more carefully about tradeoffs, she added.

“I love data,” she said. “It gives you a fair picture.”

For more coverage of Fortune’s Most Powerful Women Summit, click here. And to subscribe to the Broadsheet or Data Sheet, Fortune’s newsletters about powerful women and technology, respectively, click here.

5 Essential Communication Strategies for Perfectionists

My company has invested years of research in developing a program that identifies six key personality types and their corresponding communication styles. One of the six main traits we identified is the “diligent” personality type. People who strongly identify with diligent personality traits are inclined toward perfectionism. By definition, diligent or perfectionist personality types are motivated by data and practical behaviors. They exude confidence in an evidence-based framework and are laser-focused on facts and achieving outcomes, preferring to get to the point of any conversation quickly. These particular skills are often advantageous to any organization when balanced with a healthy component of emotional intelligence. Another upside to this type of particularized communication is that people can count on diligent personalities to be objective and efficient in their interactions.

Problems arise when you are locked into a diligent state. Every problem must be solved, no stone can be left unturned, and every issue must have closure. Perfectionists are also inclined to persuade, over-explain, and offer unsolicited advice. If you identify with these strong perfectionist habits, you may also tend to face challenges when communicating with others. If you can’t keep your intense feelings in check, it can leave others exhausted and tuned out.

When regulated, perfectionist types are open to the needs of others rather than sticking with a “prove it to me” mentality of engagement. Consider the following tips to help loosen the reins of perfectionism and improve your communication. The ultimate payoff will be a more balanced and healthy relationship with others — as well as yourself.

1. Hold your tongue.

Practice mindful or silent listening. Refrain from providing “answers” to statements unless one is framed as an actual question posed to you. Most people just want to be heard as they work through their feelings and thought process. They are not expecting you to provide an answer to their issue or problem. Honor their unspoken wishes. Try to adopt the mindset that not every conversation requires your input or an instant solution.

2. Don’t fill in the blanks.

When you are caught up in fact-based thinking, impatience can set in, especially when you believe that you already know what the other person is going to say. When this happens, you stop listening and appear disrespectful. It can cause frustration for all parties involved. There is a fine line between identifying the facts and coming across as a know-it-all.

3. Stop critiquing every idea.

Perfectionism can stifle creativity and innovation. Brainstorming is an effective exercise to overcome detail-oriented biases. Allow thoughts, feelings, and ideas to flow freely — without evaluating, critiquing or questioning. Just try it. There are many ways to get started, including writing down your thoughts and asking open-ended questions.

4. Know when to accept “good enough.”

Not all exchanges require a definitive outcome, and not all tasks must be completed to exhausting perfection. Issues arise when intensity takes over — and you cannot let go. Allow yourself the freedom to feel comfortable with unrestricted outcomes and a less than perfect finale. Remind yourself to “let go” and then ask, “Will this be important, in one week, one month or one year?” Accept that 100 percent closure isn’t always the other party’s priority in the first place.

5. Learn mindfulness meditation.

Even the most self-proclaimed over-analytical thinkers can benefit from a mindfulness meditation practice. It takes commitment and patience to get started, but the results can be remarkably rewarding. Meditation can help free a closed mind and build self-awareness. Here is a short list of evidence-based findings that highlight many of the wellness benefits of mindful thinking. An excellent place to start is with this five-minute meditation that can be practiced anywhere — even at your desk.

SEC chairman says Tesla settlement in 'best interests' of shareholders

WASHINGTON (Reuters) – U.S. Securities and Exchange Commission chairman Jay Clayton said in a statement on Saturday that the agency’s settlement with carmaker Tesla was in the best interests of the U.S. markets and company shareholders.

FILE PHOTO: Jay Clayton, Chairman of the Securities and Exchange Commission, testifies at a Senate Banking hearing on Capitol Hill in Washington, U.S. September 26, 2017. REUTERS/Aaron P. Bernstein/File Photo

Earlier on Saturday, the agency said it had fined Musk and Tesla $20 million each and required Musk to step down as chairman to settle securities fraud charges over Aug. 7 tweets in which Musk said he was taking the company private.

“I…fully support the settlements agreed today and believe that the prompt resolution of this matter…is in the best interests of our markets and our investors, including the shareholders of Tesla,” Clayton said.

Reporting by Michelle Price; Editing by Alistair Bell

Facebook says big breach exposed 50 million accounts to full takeover

(Reuters) – Facebook Inc (FB.O) said on Friday that hackers stole digital login codes allowing them to take over nearly 50 million user accounts in its worst security breach ever given the unprecedented level of potential access, adding to what has been a difficult year for the company’s reputation.

Facebook, which has more than 2.2 billion monthly users, said it has yet to determine whether the attacker misused any accounts or stole private information. It also has not identified the attacker’s location or whether specific victims were targeted. Its initial review suggests the attack was broad in nature.

Chief Executive Mark Zuckerberg described the incident as “really serious” in a conference call with reporters. His account was affected along with that of Chief Operating Officer Sheryl Sandberg, a spokeswoman said.

Shares in Facebook fell 2.6 percent on Friday, weighing on major Wall Street stock indexes.

Facebook made headlines earlier this year after profile details from 87 million users were improperly accessed by political data firm Cambridge Analytica. The disclosure has prompted government inquiries into the company’s privacy practices across the world, and fueled a “#deleteFacebook” social movement among consumers.

U.S. lawmakers said on Friday that the hack may boost calls for data privacy legislation.

“This is another sobering indicator that Congress needs to step up and take action to protect the privacy and security of social media users,” Democratic U.S. Senator Mark Warner said in a statement.

Federal Trade Commission Commissioner Rohit Chopra on Twitter said “I want answers” with a link to a Reuters story on the breach.


Facebook’s latest vulnerability had existed since July 2017, but the company first identified it on Tuesday after spotting a “fairly large” increase in use of its “view as” privacy feature on Sept. 16, executives said.

“View as” allows users to verify their privacy settings by seeing what their own profile looks like to someone else. The flaw inadvertently gave the devices of “view as” users the wrong digital code, which, like a browser cookie, keeps users signed in to a service across multiple visits.

That code could allow the person using “view as” to post and browse from someone else’s Facebook account, potentially exposing private messages, photos and posts. The attacker also could have gained full access to victims’ accounts on any third-party app or website where they had logged in with Facebook credentials.

“The implications of this are huge,” Justin Fier, director of cyber intelligence at security company Darktrace, told Reuters.

Guy Rosen, the Facebook vice president overseeing security, said the flaw was “complex” in that it resulted from three failings.

A video upload feature should not have displayed on a user’s profile page when accessed through “view as,” Rosen told reporters on a conference call. That alone would not have been problematic except that the video feature wrongly triggered the placement of the powerful login code. And it placed the code not for the “view as” user, but for who they were pretending to be.

Facebook fixed the issue on Thursday. It also notified the U.S. Federal Bureau of Investigation, Department of Homeland Security, Congressional aides and the Data Protection Commission in Ireland, where the company has European headquarters.

The Irish authority expressed concern in a statement that Facebook has been “unable to clarify the nature of the breach and risk to users” and said it was pressing Facebook for answers.

Slideshow (2 Images)

Facebook reset the digital keys of the 50 million affected accounts, and as a precaution temporarily disabled “view as” and reset those keys for another 40 million that have been looked up through “view as” over the last year.

About 90 million people will have to log back into Facebook or any of their apps that use a Facebook login, the company said.

Two Facebook users sued the company over the breach in federal court in California on Friday.

More than 6,000 users complained about the breach on Zuckerberg’s Facebook page.

“I’m so scared now. All my activities are on Facebook,” Mohammad ZR Zia, a 25-year-old college student in Kuala Lumpur, Malaysia, who has been using the social media platform since 2009, told Reuters. His account was logged out earlier on Friday.

The level of concern expressed on Facebook was enough that the company’s automated system temporarily blocked sharing of some articles about the breach.

“Our security systems have detected that a lot of people are posting the same content, which could mean that it’s spam,” a message told users. Facebook later apologized for the misfire.

Facebook has suffered narrower breaches before.

In 2013, Facebook disclosed a software flaw that exposed 6 million users’ phone numbers and email addresses to unauthorized viewers for a year, while a technical glitch in 2008 revealed the confidential birth dates of 80 million Facebook users.

Reporting by Munsif Vengattil and Arjun Panchadar in Bengaluru and Paresh Dave in San Francisco; Additional reporting by Christopher Bing, Jim Finkle and David Shepardson in Washington, D.C., Joseph Menn in San Francisco and Angela Moon in New York; Editing by Clive McKeef

The Facebook Security Meltdown Exposes Way More Sites Than Facebook

On Friday, Facebook revealed that it had suffered a security breach that impacted at least 50 million of its users, and possibly as many as 90 million. What it failed to mention initially, but revealed in a followup call Friday afternoon, is that the flaw affects more than just Facebook. If your account was impacted it means that a hacker could have accessed any account that you log into using Facebook.

That’s a lot of them. You can read a fuller accounting of the hack here, but essentially it combines three bugs relating to Facebook’s “View As” feature, which lets users see what their profiles look like when other people view them. A video upload tool—intended to enable “Happy Birthday” videos—would erroneously appear on the “View As” page, and provide the access token of whomever the hacker searched for.

Facebook initially responded by logging out both the 50 million people it knows were affected by the attack, and an additional 40 million who were looked up with the “View As” tool in the last year. It also hit pause on the “View As” feature. But the second revelation Friday indicates that the fallout may be far more widespread than initially indicated.

Beyond the impact on Facebook accounts themselves, the company confirmed that the breach affected Facebook’s implementation of Single Sign-On, the practice that lets you use one account to log into others. The idea is to use a trusted service—like Facebook, Google, or Twitter—to log into sites and services across the web, rather than create a unique profile for each one. That saves time, and ensures you’re logging in through an entity you trust. In this case, it also appears to have potentially made Facebook’s breach an internet-wide calamity, at least for those impacted.
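The mechanics can be illustrated with a toy model: one identity provider issues a token, every relying site trusts it, so one stolen token opens every site until the provider revokes it. This is a deliberately simplified sketch, not Facebook’s actual protocol; real SSO flows (OAuth 2.0 / OpenID Connect) exchange short-lived, audience-scoped tokens, and all class and method names here are invented for illustration.

```python
# Toy model of why a stolen Single Sign-On token is so damaging.
# Names are hypothetical; real flows use OAuth 2.0 / OpenID Connect.

import secrets

class IdentityProvider:
    """Stands in for the trusted service (Facebook, Google, ...)."""
    def __init__(self):
        self._tokens = {}  # token -> user id

    def log_in(self, user):
        token = secrets.token_hex(16)
        self._tokens[token] = user
        return token

    def verify(self, token):
        return self._tokens.get(token)  # None if invalid or revoked

    def revoke_all(self, user):
        """What Facebook did: reset the tokens of affected accounts."""
        self._tokens = {t: u for t, u in self._tokens.items() if u != user}

class RelyingSite:
    """A third-party app that delegates login to the identity provider."""
    def __init__(self, idp):
        self.idp = idp

    def access(self, token):
        user = self.idp.verify(token)
        return f"welcome {user}" if user else "login rejected"
```

The same token works on every relying site, which is the convenience of SSO; revoking it centrally locks all of them again, which is why Facebook reset the keys for all 90 million accounts.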

“The access token enables someone to use the account as if they were the account holder themselves. This does mean they could access other third-party apps using Facebook login,” Guy Rosen, Facebook’s vice president of product, said in a call with reporters Friday. “Developers who used Facebook login will be able to detect those access tokens have been reset.”

It’s unclear how long those third-party sites will accept the stolen access tokens, or how difficult it would be for an attacker to use an access token to get into a third-party site.

Facebook separately says it has invalidated data access for third-party apps for the affected individuals, meaning if you’re one of the 90 million people potentially affected, you won’t be able to, say, share an image from Instagram over to Facebook without changing your password.

Meanwhile, Facebook has still not confirmed whether any third-party accounts were actually compromised, and still has not detailed exactly what type of data hackers could have gotten away with. (That they could gain full access to Facebook accounts gives at least a baseline: Anything and everything on your profile would have been exposed.) Facebook also declined to say exactly how long attackers took advantage of the vulnerability, which was introduced in July 2017. Fourteen months is a very large window to do potential damage.

As for how widespread the attack was, Rosen said the targeting appeared fairly broad. But New York Times reporter Mike Isaac noted that Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg had their accounts compromised as part of the attack.

Facebook already faces legal challenges as a result of the disclosure; Facebook users Carla Echavarrai and Derrick Walker have filed a class action suit in California. “It is shocking that after all the publicity surrounding Facebook’s handling of personal information in the wake of Cambridge Analytica and its promises to do better by its users that Facebook has yet again failed to protect consumers’ information from hackers,” said their attorney, John Yanchunis, in a statement.

The debacle also underscores broader concerns about Single Sign-On, which on Friday turned into the ultimate object lesson in the inherent tradeoffs between security and convenience. “Single Sign-On schemes are great in the sense that the Federal Reserve cash vault in Atlanta is dramatically more secure than the safe at a local credit union,” says Kenn White, director of the Open Crypto Audit Project. “But the downside is if a Single Sign-On gets breached, you’re hosed.”

Sticking with one more secure sign-in does make sense, especially for use on sites that don’t have the resources or inclination to invest heavily in security development. But just like you want your passwords to be unique so compromising one doesn’t expose them all, account diversity is also vital online no matter how ironclad a particular sign-in scheme is. “You don’t want a situation where there’s one breach and your entire online identity is gone,” White says.

It remains to be seen whether that’s the case for 50 million—or 90 million—Facebook users. “We’re just starting to work through the full scope of what we’ve seen here,” said Rosen. For those affected, it’s an excruciating wait.

More Great WIRED Stories

​Pulsar graduates to being an Apache top-level project

In Montreal at ApacheCon, the Apache Software Foundation (ASF) announced that Pulsar had graduated to being an Apache top-level project. This pub-sub messaging system boasts a flexible messaging model and an intuitive client application programming interface (API).

Pulsar is a highly scalable, low-latency messaging platform running on commodity hardware. It provides simple pub-sub and queue semantics over topics, lightweight compute framework, automatic cursor management for subscribers, and cross-datacenter replication. It was designed from day one to address gaps in other open-source messaging systems.
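The pub-sub and cursor semantics described above can be modeled in a few lines. This is a conceptual sketch only, not the Apache Pulsar client API (that lives in the pulsar-client packages for Java, Python, and C++), and real Pulsar persists messages in Apache BookKeeper with cursors tracked server-side; the class below is invented for illustration.

```python
# Toy model of Pulsar-style topic/subscription semantics, for intuition
# only. Real Pulsar stores the log in Apache BookKeeper and manages
# subscription cursors on the broker.

class Topic:
    def __init__(self):
        self.log = []      # append-only message log
        self.cursors = {}  # subscription name -> next position to read

    def publish(self, msg):
        self.log.append(msg)

    def subscribe(self, name):
        # "automatic cursor management": each named subscription keeps
        # its own position in the log, independent of other subscribers
        self.cursors.setdefault(name, 0)

    def receive(self, name):
        pos = self.cursors[name]
        if pos >= len(self.log):
            return None    # nothing new for this subscription
        self.cursors[name] = pos + 1
        return self.log[pos]
```

Because each subscription advances its own cursor, two consumers can read the same topic at different speeds without interfering, which is the core of the pub-sub model Pulsar layers its compute and replication features on top of.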

Also: Apache Flink: Does the world need another streaming engine?

The initial goal for Pulsar was to create a multi-tenant, scalable messaging system that could serve as a unified platform for a wide set of demanding use cases. Since then, its scope has been expanded to add a lightweight compute framework and a connector framework, which enable users to process data and integrate with external systems from inside Pulsar. This makes it interesting for both real-time and big data applications.

Pulsar’s architecture separates the serving and storage layers by leveraging Apache BookKeeper as the persistent storage component, which has proven to be a key strong point. This two-layer architecture simplifies cluster operations, letting operators easily expand clusters and replace failed nodes, and it provides much higher write and read availability.

Its other main features include:

  • Native support for multiple clusters with seamless geo-replication of messages across clusters.
  • Low publish and end-to-end latency.
  • Seamless scalability out to over a million topics.
  • Client API with bindings for Java, Python, and C++.

Does some of that sound familiar? If you’re a programmer, it should. While it’s not a duplicate of Apache Kafka, which is usually used for building real-time data pipelines and streaming apps, sometimes Pulsar is better. As Jim Jagielski tweeted, “Apache Pulsar is not only more performant than Apache Kafka, but makes it super easy to increase partition size and/or duration. Been bitten by this a few times :).”

Also: A critical Apache Struts security flaw makes it ‘easy’ to hack companies

He’s not the only one who likes Pulsar as a drop-in Kafka replacement. InfoWorld just awarded Pulsar its 2018 Best of Open Source Software award for databases and data analytics. It wrote, “Pulsar offers the potential of faster throughput and lower latency in many situations, along with a compatible API that allows developers to switch from Kafka to Pulsar with relative ease.”

“Launching Pulsar at Yahoo in 2015, our goal has always been to make Pulsar widely used and well-integrated with other large-scale open source software,” said Joe Francis, Oath’s Director of Storage and Messaging. It looks like he’ll see his goal realized.

Related stories:

The Biggest Mistake Companies Make When They Go Digital

These days every company is a technology company. Even the stodgiest old-line industrial companies are embracing digital strategies to stay competitive. But that doesn’t mean they’re good at it.

Speaking at Fortune‘s Brainstorm Reinvent conference in Chicago, Aaron Levie, the co-founder, chairman, and CEO of online storage giant Box, explained how many firms misfired when they embraced digital four or five years ago.

“Many companies’ first forays into digital didn’t work. They acquired companies and started labs or added ping-pong tables in hopes of being like Google,” said Levie, adding that they nonetheless missed the point.

Levie’s fellow panelist Melanie Kalmar, who is the corporate vice president, chief information officer, and chief digital officer at 110-year-old Dow Chemical, shares Levie’s view. And both executives cited the same reason for the stumbles.

The crucial mistake, said Kalmar, is treating digital as something to add on to a company’s existing operations.

“A lot of ill-fated strategies occur when they say, ‘We’ll take the existing things we do and apply the Internet to it,’” added Levie.

In doing so, companies fail to harness the main promise of digital technology, which is to be more agile and to get closer to their customers.

The temptation to treat digital as an additional branch of their existing business—rather than as a new type of process—is understandable given the strategies many industrial companies used to become successful in the first place. That strategy involved owning lots of physical property and directing large numbers of employees in a command-and-control environment.

Today’s successful businesses, said Levie, rely on using small teams that interact directly with customers in order to iterate and constantly improve. He added that so much of the complexity in big companies’ operations stems from different silos hoarding information.

This doesn’t mean, of course, that every old guard industrial company is going to fade away. Indeed, Kalmar noted how Dow has reinvented itself multiple times in its history and is currently doing so again. Right now, she says, Dow is learning to be “agile at scale.”

Levie and Kalmar also reflected on the famous “software is eating the world” axiom coined by venture capitalist Marc Andreessen. Both agreed that the observation is prescient but that it has not meant, as some supposed, that the future belonged only to software startups. Instead, the winners are turning out to be the companies—including large incumbents—that deploy digital best.

Why You May Love an Amazon Alexa Microwave

Amazon is all-in on Alexa, and this week, it revealed a new set of voice-enabled products ranging from a wall clock to a doodad that goes in your car. The star and symbol for this bold new wave of Alexa devices? The AmazonBasics Microwave.

At a glance, it looks identical to every other 700W microwave, but it has some new tricks. By touching the Alexa button on it, you can ping a nearby Echo speaker, which will let you tell the microwave what you want to cook. In demos, Amazon showed how you could ask to cook “one potato,” commanding the microwave to heat a potato like only a microwave can.

OK, OK, so asking Alexa to cook a potato doesn’t sound all that profound. Many Twitter users poked fun at the idea, and some publications have suggested it’s “unnecessary” or wondered if “we really need” a smart microwave.

Of course, the answer is no. But if Amazon gets it right, a voice-controlled microwave could bring this dated device into the 21st century.

Fixing the Microwave

Regular old microwaves still work as well as they did in the 1970s, when they first became a thing people put in their kitchens. That’s the problem. It’s an appliance that’s hardly changed in half a century.

Most households own a microwave oven, but sales peaked in the mid-2000s and haven’t grown since. In 2014, Quartz dug into what it saw as the slow death of the microwave oven, pinning the lack of growth on a lot of possible culprits, from healthy eating to toaster ovens. But a lack of innovation has also contributed.

Microwave oven interfaces are deceptively complex, full of annoying button combinations. If you have a modern microwave, it probably came with 10 power levels and a bunch of pre-programmed modes to defrost, heat from frozen, melt or soften items, and cook a variety of foods. These handy presets can make the cheese on a slice of pizza melt rather than go rubbery, or heat two cups of frozen vegetables just right.

Most microwaves already know how long to cook something based on food type and portion. Unfortunately, it’s really difficult to remember how to use them. Sometimes there’s a chart behind the door; other times, you have to keep the user manual handy to fully operate your microwave.

Here’s an example: To heat frozen vegetables in my microwave, I have to press the “cook” button, wait, then press 5, wait, then press 2. There are more than 80 button combinations you have to memorize to use it precisely. A lot of microwaves are like this. It’s no wonder that most frozen meals just say “heat on high for three minutes.”

Every microwave has different presets with different button combinations that do different things. It’s more difficult than memorizing attack combos in Street Fighter II. No one should have to remember all that.

If Amazon gets its new microwave right, it could really improve the experience. Instead of using those horrible button combos, we could begin to tell our microwave the gist of what we want it to do—”defrost two cups of frozen peas”—and let it do the heavy lifting. The company says that at launch, the microwave should be able to defrost several types of foods, like vegetables or chicken, by varying the microwave’s power level, as well as adjust the cook time. It could mean a lot fewer undercooked potatoes and far less exploding tomato sauce in our future.

Standard microwaves can’t learn new tricks, but Alexa can. Amazon could continually refine the software with new foods, meaning a voice-connected microwave may actually get better over time. And it likely won’t be long before Google introduces one of its own.

Better Nuking Ahead

Of course, Amazon’s microwave may not live up to its potential. We weren’t all that impressed with GE’s Smart Countertop Microwave, which also comes with Alexa compatibility. In that device, Alexa doesn’t actually vary the power level or do all that much.

And talking to the microwave isn’t always convenient. It takes less time to press the “add 30 seconds” button than to press the Alexa button, then ask Alexa to add 30 seconds. You can command Alexa to stop the microwave, but why would you do that when you could just push a button yourself? You have to open the door to get your food, after all.

Then there are the privacy pitfalls. Do we really want Amazon to keep a detailed log of all our microwave use? Overzealous data logging is a problem with almost every new connected device—and a microwave might not benefit us enough to make the privacy tradeoff worth it.

Amazon hopes the extremely low price will ward off those concerns. At $60, the AmazonBasics Microwave is nearly half the price of some competitors. That alone will convince some people to try it.

Microwaves are imperfect tech, and voice control alone can’t make your frozen dinner taste better. But there’s a good chance you’re not making use of the helpful presets already built into yours. If Alexa succeeds, and I can forget how long it takes to cook a potato or the mind-numbing button combination I need to defrost veggies in the microwave, count me in.

The Stubborn Bike Commuter Gap Between American Cities

Cycle commuting is hot.

Warm, at least.

Depending on where you live. Each year, the League of American Bicyclists, a nationwide cycling advocacy organization, takes a look at the annual commuting numbers from the American Community Survey. The ACS is a product of the US Census Bureau, and if you’re a cycling advocate, it asks one particularly helpful question every year: “How did this person usually get to work last week?” The League of American Bicyclists took last year’s respondents’ answers to that question—as it has for the past five years—and broke them out by city to answer another helpful question: Where is American cycling growing?

Some quick caveats. The ACS data doesn’t capture the number of folks who are cycling for fun or to run errands. (Note: the number of bike-share trips were up dramatically last year.) People who cycle to a bus or train station might only report the public transit leg of their commute. The data might not take into account those who cycle to work one or two times a week, instead of every day. And because it limits respondents’ answers to a single week, it might not capture people who cycle seasonally, strategically avoiding a bicycle commute at the sweaty height of summer or frozen depths of winter. (The Census Bureau solicits survey responses from about 3.5 million Americans throughout the year.)

All that said: In 2017, according to the ACS, the share of commuters cycling to work actually dipped by 4.7 percent compared with the previous year. Less than one percent of American commuters regularly use their bicycles to get to work. But 84 percent of the 70 largest cities in the US have seen an upward cycle-commute trend over the past twelve years.

The most interesting trend in these numbers—and certainly not a new one—is the uncovering of a profound cycle-commuting gap. In the five US cities with the highest share of cycle commuters (Davis, Santa Cruz, and Palo Alto, California, plus Boulder, Colorado, and Somerville, Massachusetts), an average of 11.7 percent took bicycles to work last year. But in the next five (Cambridge, Massachusetts; Berkeley, California; Miami Beach, Florida; Portland, Oregon; and Ames, Iowa), just 7 percent commute by bike. Take cities 20 to 25 (Redwood City and San Francisco, California; Bloomington, Indiana; Portland, Maine; and Salt Lake City), and just 3.1 percent of commuters take bikes to work. You’re either a cycling city, one that opens its arms wide to welcome the two-wheeled—or hardly one at all.

“I shouldn’t be surprised, but I’m always a little bit surprised by the difference between the regions and just how far ahead western cities tend to be compared to every other region,” says Ken McLeod, the League of American Bicyclists’ policy director, who wrote the report. In the West’s top 20 cycling cities, an average of 5.9 percent of commuters cycle to work. But just 2.2 percent of workers pedal to the office in the Midwest’s top 20 cities. It’s 2.1 percent in the South. Maybe most surprising of all: in the American East, known for its dense, urban places that should be hospitable to cycling, just 2.5 percent of those in the region’s top 20 cycling cities actually cycle to work.

The chasm seems to be a function of city investment. “In most, if not all places that have sustained increases in bicycle commuting, there have been investments in bicycle infrastructure—roadways that account for people on bikes and people walking,” McLeod says. “Those places have tried to reduce speeds and make driving safer, too, so people feel safer while biking.”

In Washington, DC, for example, where cycle commuting more than doubled between 2006 and 2017, the city has added about 80 miles of bike lanes since the turn of the century. It wants to build at least 50 miles more by 2020—and it wants most of those to be protected (i.e., more than a strip of paint). In fact, DC is the fastest-growing cycle commuting town in the country. Infrastructure works.

Of course, spreading the cycling revolution will take more than kindly asking cities to pretty please emulate DC or others with fast-growing cycle commuting populations, like Portland, Oregon, New Orleans, San Francisco, and Philadelphia. Cycling advocates say it’s a matter of making bicycle-friendly street design standards, well, standard, across many cities.

Some good news on that front: As Streetsblog first reported this week, the American Association of State Highway and Transportation Officials—the macher of American transportation design, which puts out highly influential engineering manuals used the country over—is revamping its bike guide. For the first time, the guide might include more cycling-safe infrastructure, like protected intersections and parking protected bike lanes. Engineering manuals may sound boring, but they’re how even understaffed cities can justify putting in different sorts of infrastructure. So they could be the key to getting more people cycling, everywhere. Way more than one percent.


First North Carolina Got a Hurricane. Then a Pig Poop Flood. Now It’s a Coal Ash Crisis

After the storm comes the flood. Hurricane Florence poured 8 trillion gallons of rain onto North Carolina, and now the landscape between the Cape Fear River and the barrier islands of the Carolinas is a waterworld. Because ecological disasters happen in irony loops, that means long-recognized hazards have now become add-on catastrophes. First the floodwaters found thousands of literal cesspools containing the waste of 6 million hogs, and on Friday the waters reached a pool of toxic coal ash.

The water has breached the cooling lake at the LV Sutton natural gas plant on the Cape Fear River, forcing it to shut down. Also onsite are two coal ash basins, at least one of which—containing 400,000 cubic yards of the stuff, according to the owner of the facility, Duke Energy—may already be leaking coal ash into the river.

Coal ash is the irony part. Coal-fired power plants had to be located near the mountains that harbored the coal, and near the waterways that the power plants needed for coolant and water to boil to spin the turbines. “One of the consequences of burning coal is you get ash, and then you have to have something to do with it,” says Stan Meiburg, director of graduate studies in sustainability at Wake Forest University and a former EPA deputy administrator, both in DC and the Southeast. “The earliest practices were to put the ash right near the power plant.”

Coal use has been tailing off in the US, but as recently as 2011 the country was generating 130 million tons of coal combustion residue, or CCR, every year. More irony: Better air quality management technology captured more fly ash before it could leap out of smokestacks, raising the amount of CCR. Dry, the ash flies all over the place and can be a toxic inhalant. But get it wet, like mud, and it stays still and is easier to transport to landfills.

After the carbon in coal gets oxidized, what’s left is a list of metals that you hope are not present in jewelry: lead, mercury, selenium, arsenic, cadmium, chromium and a bunch of other bad actors. For decades people suspected that the gunk in the pools might leach into groundwater, or that a storm could breach the walls of a pool and the ash slurry would get into a river or lake. There were indications that they might cause problems—the fish and amphibians in the lakes and streams near coal ash ponds had reproductive problems, organ damage, higher metabolic rates indicating some kind of physiological stress. Metals accrued in the animals that ate them. In one particularly disturbing outcome, researchers found tadpoles with scoliosis and mouth deformations—they were missing not just teeth but whole rows of teeth.

Hilariously, none of the more than 1,000 coal ash ponds in the US were regulated in any way at all. And then in 2008, one of them broke open and poured a billion gallons of slurry all over eastern Tennessee. Meiburg says he recalls estimates that it would have cost the pond’s owner, the Tennessee Valley Authority, $50 million to remediate; it cost over $1 billion to dig the ash-mud out of the river bottoms.

In 2014 it happened again. Two stormwater drain pipes beneath a Duke Energy coal ash pond in North Carolina collapsed, spilling 39,000 tons of ash and 27 million gallons of slurry into the Dan River. North Carolina passed regulatory laws. The EPA got some regulations together. By 2015, there was at least a schedule for utilities to get their coal ash put into safer landfills. “What the public interest community called for was closure of the unlined, dangerous ponds. The 2015 rule from the Obama administration didn’t go that far,” says Lisa Evans, senior counsel for the environmental group Earthjustice. “It improved the situation immensely, but it didn’t get the job done.”

Irony again: One of the first things the EPA did under President Trump was re-weaken those coal ash regulations.

And irony yet again: The coal ash at the Sutton plant? “The basins are slated to be closed by the middle of next year,” says Paige Sheehan, a spokesperson for Duke Energy. “Some of the material was taken by train to a lined structural fill. The remainder is being moved to a new lined landfill on site.” But Duke knows the situation is dicey. Another coal-burning byproduct the company stores at Sutton, cenospheres—microscopic, hollow spheres made of silica and alumina sometimes recycled into concrete or other composite materials— “are flowing into the Cape Fear River,” she says. “We cannot rule out that coal ash might also be leaving the basin.”

LV Sutton isn’t the only plant that’s a potential problem. Another site, the closed Grainger Generating Station near the Waccamaw River in South Carolina, has 200,000 tons of coal ash within reach of rising floodwaters. Sheehan says Duke’s also watching pools at another plant called HF Lee, in Goldsboro. “This is like a natural experiment going on down there right now, because the possibility of more pool systems like this failing and releasing their waste, combined with all the other waste from pig farm operations? It’s hard to wrap your head around,” says Christopher Rowe, a biologist at the University of Maryland Center for Environmental Science.

How complex? The immediate risk depends on the volume that actually gets released, and the next couple of days at the high water mark will determine that. Living things in the waters downstream can absorb that wide spectrum of heavy metals suspended as solids, to varying effects. But then those solids sink to the bottom.

But it still could be dangerous. “Even if the water in the river becomes very clear, and you can’t find any traces of contaminants,” says Avner Vengosh, a water quality and geochemistry researcher at Duke University. “The coal ash buried at the bottom slowly but surely releases contaminants into the ambient environment.”

The source is “pore water,” water mixed into the coal ash sediments in the top five inches or so of the riverbed. There’s no oxygen down there, so that dirt becomes the electrochemical opposite of oxidizing, what chemists call “reducing.” The heavy metals behave very differently, becoming more bioavailable to any critters at the bottom. “In an oxidizing form, it would tend to be absorbed into the sediment. In a reduced form it tends to be soluble in the water,” Vengosh says.

So you have to clean that mud out—a dangerous process in itself. At least 30 people who worked on cleaning up the 2008 spill are dead and, reports say, 200 more are sick; a lawsuit is ongoing.

And time is a factor, because climate change means hurricanes will, like Florence, be more intense and drop more rainfall, some of them right onto the Carolinas. There’s the final irony: among the major contributors to the greenhouse gases that cause climate change were, of course, all those coal-fired power plants.


Airbnb Just Revealed 3 Statistics That Will Change the Way You Lead

The surprising results

The Airbnb Plus survey had three key findings noteworthy for entrepreneurs and leaders:

  • People would rather have more comforts, such as super soft sheets, than an Internet connection: 59 percent in the U.S., 46 percent in Australia, and 39 percent in Italy said air conditioning was the most important indoor amenity, beating WiFi and full kitchens.
  • Functionality is the highest valued amenity trait (43 percent), followed by thoughtfulness (e.g., leaving guests a bottle of wine) (29 percent).
  • Even though people will put the Internet aside, the “cool factor” matters to millennials (12 percent), with 58 percent saying social-media-worthy accommodations are a major factor when booking a stay.

Amber Cartwright, Global Design Lead for Airbnb Plus, translates the data and dissects what’s driving the findings.

“When traveling, people want to escape their everyday lives of emails and notifications and immerse themselves in a new place far from reality. Instead of connectivity, they prefer a comfortable place to call home with thoughtful touches that represent the local community, which are both amenities Airbnb Plus hosts provide.”

Cartwright also interprets the desire for shareworthy locations as more than just the desire to keep up with the Joneses.

“Though the shareability factor with friends and family is a motivation,” Cartwright says, “it’s no surprise that amenities that look incredible on social media–like infinity pools with a view or a kitchen fit for a chef–also make for an exceptional stay.”

In other words, it all ties back to the trend for an emphasis on memorable experience. Travelers blow up their Instagram feed with pictures of material stuff not to show off, but because the amenities affect the story of the trip, shaping what the travelers do and remember.

The big picture

So what can you take away from all this as people on your team start calling airlines and hotels?

  • For real vacations, leave. People. Alone. Workers are desperate to simplify and get away from responsibilities for a little while. Stop sending emails or asking them to get on your chat platform.
  • If employees are traveling for their jobs, they’ll appreciate you finding accommodations where they’re treated like people rather than robots. Make the effort to find locations where people can feel welcome and at home; that makes them happier and more relaxed, so they can actually be productive for you. As Cartwright summarizes, “never underestimate the power of a personal touch,” whether that’s for your partners, employees, or customers.
  • Don’t be surprised when team members tell you they’re going to locations that don’t immediately come to mind as vacation destinations. According to Cartwright, because people are willing to unplug, they’re increasingly booking stays in more remote places where they can fully reset. They’ll appreciate it if you do a little research to suggest some more far-flung possibilities to your team so they see what’s out there. Maybe you could even offer incentives or a contest for employees who go somewhere they’ve never been. While you don’t necessarily want to insist they do word-of-mouth advertising for you while they’re away, encourage them to make connections wherever they visit that could grow your business later on.

As you chew on this data, take to heart that the majority of people in the United States still struggle to use the vacation time given to them. Even though they want to get away and perhaps even recognize the mental and physical benefits of doing so, they still feel pressured to stay nose-to-the-grindstone 24/7. If you can model breaks yourself, if you can work mandatory time off into policy to show vacations are safe, do it. Use the information above not only to provide amazing, restorative trips, but to expand your company, too.

Published on: Sep 21, 2018

Cool Tech Isn't Just for Big Brands

Last week Betaworks hosted an event in New York called Future Tech for Brands. Four industry leaders shared their points of view, largely oriented around building for the curve. The panel included Suzana Apelbaum, Head of Creative at Google; Dan Bennett, Worldwide Chief Innovation Officer at Grey Group; Alex Magnin, Head of Revenue at GIPHY; and Richard Ting, R/GA’s global experience design lead.

The introductory remarks focused on the plague of synthetic media and fake news. This quickly led to the revelation that Venmo is rapidly becoming the best, and most trusted, social network. While neural networks creating synthetic memes and deepfake videos might be a turn-off, Venmo offers candor and transparency as users put their money where their mouth is. But what about trends in how brands are engaging with tech today?

The first is the rise of live formats like HQ and Twitch. Arguably, brands are still struggling to work out the best ways to integrate into these channels, but is there a way for entrepreneurs to play in this space? Suzana from Google placed her bets on assisted experiences and voice experiences, citing the success of Aiden, the chatbot created for Westworld and HBO. You can talk to Aiden via Google Home from the comfort of your sofa. From Aiden to the Johnnie Walker guided tasting experience, voice will become the most natural way brands communicate. How are you leveraging it?

Giphy’s Alex Magnin celebrated searching for GIFs and sending GIFs to your friends as a force in the cultural zeitgeist. Giphy allows brands large and small to target based on sentiment, which translates into GIF search. Incidentally, 70 percent of all Giphy usage happens through 1:1 messaging apps. On New Year’s, the Facebook user community sent over 400 million GIFs. Could this be a fun way to communicate with or respond to your customers? After all, Giphy is a visual search engine reaching 300 million people per day.

Richard from R/GA was emphatic about computer vision and chatbots. Clarify, a computer vision company incubated by R/GA, connects computer vision to objects in the real world such as sneakers, clothing, and cars. It is one of many firms making computer vision more accessible. The best-in-class chatbots recognized were the Rose bot, for the Cosmopolitan hotel in Vegas, and Erica, Bank of America’s chatbot. Beyond computer vision, chatbots, and voice, Apply.AI was also mentioned. This company, in private beta, was designed to help developers of applicant tracking systems (ATS), human capital management software (HCM), and job boards manage their prospects and applicants using AI.

Dan from Grey was enthusiastic about ambient computing and a world where everything has sensors in it. The Ambient offers examples of products and services in this space, along with what to use them for. While Dan’s mentions of near-field radiation technology and 5th Wave computing may be irrelevant to your business, it might be time to consider how your company is using technology to engage both internally and externally.

10 Reasons Why You Aren't Growing As a Leader (and Why You Should Still Try)

There’s no shortage of leadership content available. As you read this article, millions of other people around the world are gaining knowledge on how to become better leaders via YouTube, blogs, audiobooks, and podcasts. But the alarming part is, the ratio of improvement to content consumed is nearly nonexistent.

If you’re a student of leadership, here are 10 reasons why you may not be getting better:  

1. You stop at “consume.”

Reading and listening to learn are fantastic, but if we don’t put what we take in into practice rather quickly, that effort can be for naught. Think of a golfer who hits a lot of golf balls on the driving range but can’t take that practice to the course. Find ways to quickly apply what you learn, no matter how small or insignificant it feels.

2. You view leadership as a title, not a journey.

Got a promotion into a management role? Hate to break it to you, but the title change is not going to make you a leader. Leadership is about action, not position. View your development into a leader as a long-term journey instead of a short-term accomplishment, and you’ll earn your position over time.

3. You’re not as good as you think you are.

Generally speaking, people in management roles aren’t the most self-aware. In over 80 percent of the 360° Welder Leader Assessments we’ve administered, the leader rates themselves higher than their team rates them. Additionally, research shows 80 percent of people think they are better-than-average leaders.

Regardless of whether you think you’re the best leader ever, self-awareness is a critical component of improving, and there’s no better place to gain that knowledge than from your team.

4. You focus on words more than action.

I know we all love the famous movie speeches that motivate and inspire a group. But one speech or one motivational conversation isn’t going to make a huge impact on your team. I have great news, especially if you’re not into public speaking: what will move the needle long-term are your actions and behaviors, because that’s what people remember most. There is nothing more powerful in leadership than your example.

5. You think leaders are born not made.

The age-old debate about whether leaders are born or made has been settled. Research by Leadership Quarterly found that 24 percent of our leadership comes from DNA, while 76 percent is learned or developed. Whether or not you think you have the DNA, everyone has to work hard to develop the skills, and anyone can become a better leader.

6. You are glued to your screen.

The choices for entertainment on our phones and television screens are endless. Consuming content at the expense of building real relationships with people is a big problem. Take a step back and ask yourself if you’re using your screen as an escape from reality instead of a way to connect with others. Also, consider what content you’re spending time consuming. Many of today’s content creators or main characters in popular programs aren’t the best examples of leaders.

7. You default to thinking about results.

Everyone in business knows results are important. In fact, without results, there are no jobs. But the best leaders focus on the process and the behaviors that produce the best results, versus solely focusing on the outcome. Direct your attention to doing the right things every day, and the results will follow.

8. You’re constantly giving advice.

When you’re as smart and experienced as you are, it’s really tempting to offer advice at every turn. But your advice can actually hurt your people, especially if it’s doled out often. Instead of jumping to offering a solution or “advice,” stop and ask questions to better understand the situation. Oftentimes, a little bit of coaching can help an individual uncover the answer for themselves.

9. You’re too hard on yourself.

Leading other people is one of the hardest things you will ever do in your career. No matter how good you get, you will never be immune to errors or wrong decisions. There will be times when things don’t go according to plan, and that’s okay. It’s called life. Come to terms with it and give yourself some grace.

10. You focus on the wrong things.

I know you like the sexy things that come with being a leader, but when you boil it down there are a few essentials:   

  • Understand the fundamentals by focusing on relationships built on trust

  • Get the foundation right by having a vision and core values

  • Simplify lives and improve performance by setting standards and holding people accountable

If you found yourself relating to this list, first, congratulations on being self-aware enough to admit your shortcomings. Second, keep in mind one of my favorite Latin phrases, “nunc coepi,” which means “today I begin.” Start fresh today and know that leadership is a journey, not a destination.