Mar 08

Women Rock!

By · Comments (0)

On this International Women’s Day, we celebrate and honour all women as equal and powerful rocking world-changers! From boardrooms to living rooms to stages and beyond, women change the world and give us hope for a better day, something that is especially true during these uncertain times.

Women ROCK!

Categories : Uncategorized
Comments (0)

You can stay stuck doing VDI the old way, with the complexity of running and maintaining aging platforms, but VxRail from VMware and Dell EMC gives you tomorrow’s hyper-converged VDI appliances without the trouble of the traditional siloed applications that are so “yesterday”.

Unfortunately, VDI never stays static: it needs patches, upgrades and maintenance all the time. But VxRail delivers one-click, non-disruptive patches and upgrades, scaling from 80 to 600 virtual desktops per appliance, up to a maximum of 9,600 virtual desktops.
The Beatles were right about managing your VDI … but only when you have VxRail can you sit back and “Let It Be”.

VxRail brings a full suite of industry-leading data service “friends” – replication, backup and recovery. But you also keep the buddies you already have: it’s built on VMware, so you manage VxRail through vCenter. VxRail lets you hang out with the friends you already know and brings new ones too.

You don’t have much money left after VDI’s usual power, infrastructure, data center floor space, and high administrative and operational costs.
But VxRail reduces the data center footprint and saves on power and infrastructure while minimizing admin burdens and operational costs. You can predictably evolve your VDI in an easy, repeatable, flexible, on-demand, modular way with VxRail.

It sure sounds like the Beatles had a VDI problem. You know, different vendors in the VDI stack pointing blame at each other.
They needed VxRail’s single point of 24×7 global hardware and software support. There’s remote heartbeat monitoring, diagnostics and repair, with proactive fault detection going on all the time, backed by Dell EMC’s single-point accountability and world-class service and support. VxRail brings an end to the blame-game finger pointing.

Dell EMC’s VxRail … that’s how you take your VDI song and “make it better” [ROFL]

Categories : Uncategorized
Comments (0)

If you think your WRX is fast then think again … screaming along at 1.3 PFLOPS, Kyoto University’s new supercomputer is built on Dell EMC PowerEdge servers to advance research in theoretical physics.

Yukawa Institute for Theoretical Physics director Sinya Aoki said the system will “fuel our Institution’s studies on particle physics, nuclear physics, astrophysics, condensed matter physics, and quantum information physics” and will “open up fresh research opportunities to further the frontiers of theoretical physics across Japan.”

This new system will accelerate research in computational physics, including theoretical physics and the development of innovative methods for understanding complex natural phenomena. 135 PowerEdge R840 servers make up Yukawa-21, spec’d with 2nd Generation Intel Xeon Scalable processors, Nvidia V100 Tensor Core GPUs, and Dell EMC PowerSwitch Z9332F interconnects.
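As a back-of-envelope check (my arithmetic, not a published per-node figure), spreading the quoted 1.3 PFLOPS evenly across the 135 servers works out to roughly 9.6 TFLOPS per node:

```python
# Aggregate performance per Yukawa-21 node, assuming the quoted
# 1.3 PFLOPS is spread evenly over all 135 PowerEdge R840 servers.
total_pflops = 1.3
servers = 135
per_server_tflops = total_pflops * 1000 / servers  # 1 PFLOPS = 1000 TFLOPS
print(f"{per_server_tflops:.1f} TFLOPS per server")  # -> 9.6 TFLOPS per server
```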

Aka, move over Godzilla!

Categories : Uncategorized
Comments (0)
Jan 15

Airtame In Action

By · Comments (0)

I can’t tell you how many hours I’ve blown on the first few minutes of meetings, fiddling with cables, apps and projection settings just so I can start. There’s no shortage of ways to get your screen onto a boardroom display or a projector, but they’re generally tied to a specific platform like Chrome or Apple.

Photo credit: lespounder on Visualhunt.com CC BY-SA

Enter Airtame – it works with Mac, Windows, Linux, Chromebook, Android, and iOS on any screen or projector with an HDMI port. It’s aimed firmly at business and education users, not at people wanting to stream video.

The Airtame is small but wide, so sometimes you can’t plug it straight into an HDMI port, but a short HDMI extender is supplied in the box to cover that. You also need a USB port with enough grunt to power the Airtame via its Micro-USB connector. That’s the one complaint I have about Airtame: it would be awesome if they could engineer a way to power it without this additional cable, but I won’t hold my breath.

Simple connection to HDMI

It’s easy to use: when it’s plugged in, it displays its own network name and where to download the Airtame app for your device – just click to name and set up the Airtame, and it auto-downloads updates if they’re needed. To connect, choose the device in the Airtame app and click Start to share your screen. By default graphics are streamed while audio comes from your computer – if you want to output audio to the screen you’re streaming to as well, a one-second buffer is applied to both audio and video to keep them in sync.

A quirky feature is setting a background screen for your Airtame to show when you’re not presenting – this could be your logo, contact information or even a call-to-action offer. Just don’t make it a picture of your family .. OK?!?!

Airtame is built for business – aka presentations – and not for video streaming, which becomes apparent with YouTube, for instance. Where it stands out is its connectivity and management.

I wouldn’t pick Airtame as a cheap solution, but the simplicity and manageability work for business users like me. It provides simple, good-quality screen mirroring for lots of different devices, along with flexible settings.

Categories : Uncategorized
Comments (0)

Researchers at Facebook and the University of Washington have released video of their Photo Wake-Up augmented reality project, which animates stationary characters from any still image.

Photo Wake-Up: 3D Character Animation from a Single Photo

Most current augmented reality applications are built for a specific image, like a billboard; this tool, however, can identify a human silhouette and generate a 3D version of the person, creating a realistic animation in which the figure jumps off the photo into the real world.

Digitizing and accelerating the speed of computer animation will enable transforming photography and artwork into something like the animated imagery in the Harry Potter movies. Imagine interacting with next-gen immersive advertisements (or art exhibits) where clouds move, cars drive, and animated 3D figures dance around you.

Categories : AR, VR
Comments (0)

In less than 10 years, most workplace tasks will be done by machines rather than by you and me, says the World Economic Forum in its latest AI jobs forecast.

Machines will do more work than humans by 2025

The Future of Jobs 2018 report claims that machines will perform more than half of workplace task-hours by 2025. Today, roughly 29 percent of total hours worked in major industries are done by machines, with humans handling the remaining 71 percent. That calls for a rapid shift over the next seven years; the report’s figures are extrapolated from surveys of human resource managers and corporate strategy experts.

WEF predicts that in just four years this ratio will begin to equalize (with 42 percent of total hours accounted for by AI-geared robotics). But perhaps the report’s most staggering projection is that machine learning and digital automation will eliminate 75 million jobs by 2025.

However, as new industries emerge and technological access allows people to adopt never-before-heard-of professions, the WEF offers a hopeful alternative, predicting the creation of nearly 133 million new roles aided by the very technologies currently displacing many in our workforce.
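Netting out the report’s two headline figures quoted above shows why the WEF still lands on a hopeful note:

```python
# The WEF Future of Jobs 2018 headline figures, as quoted above.
jobs_displaced = 75_000_000    # roles the report expects automation to eliminate
jobs_created = 133_000_000     # new roles it expects technology to enable

net_new_roles = jobs_created - jobs_displaced
print(net_new_roles)  # -> 58000000 net new roles projected
```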

Crystal balls: The report is thought-provoking and well put together, but it’s notoriously difficult to predict this kind of economic change. Estimates of the number of jobs that AI will destroy (or create) vary wildly.

Already, more than 57 million workers — nearly 36 percent of the U.S. workforce — freelance. And based on today’s workforce growth rates as assessed by 2017’s Freelancing in America report, the majority of America’s workforce will freelance by 2027. Advancements in connectivity, AI and data proliferation will free traditional professionals to provide the services we do best. Doctors supplemented by AI-driven diagnostics may take more advisory roles, teachers geared with personalized learning platforms will soon be freed to serve as mentors, and barriers to entry for entrepreneurs — regardless of socioeconomic background — will dramatically decline.

Categories : Uncategorized
Comments (0)

AI systems are diverse from an architectural standpoint, but there’s one thing they all have in common: datasets. The trouble is that accurate models often need loads of data (a state-of-the-art diagnostic system by Google’s DeepMind subsidiary required 15,000 scans from 7,500 patients), and some datasets are frankly harder to find than others.

AI generates synthetic scans of brain cancer

Nvidia, the Mayo Clinic, and the MGH and BWH Center for Clinical Data Science reckon they’ve come up with a solution: a neural network that generates its own training data – specifically, synthetic three-dimensional magnetic resonance images (MRIs) of brains with cancerous tumors.

Applying seemingly simple concepts from one area to another – like the GANs used to generate realistic faces – can have a tangible positive impact. Look for Nvidia and team to fine-tune this approach for other types of cancer and disease in the brain to dramatically improve patient care.

“We show that for the first time we can generate brain images that can be used to train neural networks,” Hu Chang, a senior research scientist at Nvidia and a lead author on the paper, told VentureBeat in a phone interview.

The AI system was developed using Facebook’s PyTorch deep learning framework and trained with an Nvidia DGX platform. It uses a generative adversarial network (GAN) — a two-part neural network consisting of a generator that produces samples and a discriminator, which attempts to distinguish between the generated samples and real-world samples — to create convincing MRIs of abnormal brains.
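The generator-versus-discriminator idea can be sketched with a toy one-dimensional example. This is my illustration, not the paper’s network (which uses large 3D convolutional models): here the generator learns to shift random noise toward the real data’s distribution while the discriminator tries to tell real from fake.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically safe logistic function.
    if x < -60: return 0.0
    if x > 60: return 1.0
    return 1.0 / (1.0 + math.exp(-x))

# Toy 1-D GAN: generator maps noise z to a*z + b; discriminator is
# logistic regression D(x) = sigmoid(w*x + c).
a, b = 0.1, 0.0            # generator parameters
w, c = 0.0, 0.0            # discriminator parameters
real_mean, lr, batch = 4.0, 0.05, 64

for step in range(2000):
    zs = [random.gauss(0, 1) for _ in range(batch)]
    fake = [a * z + b for z in zs]
    real = [random.gauss(real_mean, 1) for _ in range(batch)]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    gw = gc = 0.0
    for xs, label in ((real, 1.0), (fake, 0.0)):
        for x in xs:
            g = sigmoid(w * x + c) - label   # dBCE/dlogit
            gw += g * x
            gc += g
    w -= lr * gw / (2 * batch)
    c -= lr * gc / (2 * batch)

    # Generator step: push D(fake) toward 1 through the frozen discriminator.
    ga = gb = 0.0
    for z, x in zip(zs, fake):
        g = (sigmoid(w * x + c) - 1.0) * w   # chain rule back through D
        ga += g * z
        gb += g
    a -= lr * ga / batch
    b -= lr * gb / batch

# The generator's output mean (b) drifts toward real_mean as training
# proceeds, which is the same dynamic that lets the MRI GAN learn to
# produce scans the discriminator can't distinguish from real ones.
print(round(b, 1))
```

The same adversarial loop, scaled up to 3D convolutional networks, is what produces the synthetic MRIs described in the article.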

The team sourced two publicly available datasets – the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) – to train the GAN, and set aside 20 percent of BRATS’ 264 studies for performance testing. Memory and compute constraints forced the team to downsample the scans from a resolution of 256 x 256 x 108 to 128 x 128 x 54, though they kept the original images for comparison.
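The saving from that downsampling is easy to check: halving each of the three axes cuts the voxel count (and so, roughly, memory use) by a factor of eight.

```python
# Voxel counts before and after the 2x-per-axis downsampling.
full = 256 * 256 * 108
small = 128 * 128 * 54
print(full // small)  # -> 8
```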

The generator was first fed images from ADNI and learned to produce synthetic brain scans complete with white matter, grey matter, and cerebrospinal fluid. Next, when it was set loose on the BRATS dataset, it generated full segmentations with tumors.

That annotation is work that takes a team of human experts hours per scan. But how’d it do? When the team trained a machine learning model using a combination of real brain scans and synthetic brain scans produced by the GAN, it achieved 80 percent accuracy – 14 percent better than a model trained on actual data alone.

“Many radiologists we’ve shown the system have expressed excitement,” Chang said. “They want to use it to generate more examples of rare diseases.”

It’s not the first time Nvidia has employed GANs to transform brain scans. This summer, the company demonstrated a system that could convert CT scans into 2D MRIs, while another system could align MRI images of the same scene with superior speed and accuracy.

Categories : Uncategorized
Comments (0)

Whether or not your groceries arrive in a van with a driver behind the wheel might not matter to you, but an increasing number of companies are investing in autonomous delivery vehicles to improve efficiency and create significant long-term cost benefits. Yes … you’ll have to wait and see whether those savings are passed on to you, the customer.

Knock knock … who's there … your groceries!

Udelv from San Francisco has already used its autonomous vans for more than 700 driverless deliveries in the San Francisco area. Now it has signed the world’s largest deal for a grocery delivery service using self-driving vehicles.

Just like Kroger’s recent announcement with Nuro, this adds even more momentum to the autonomous vehicle space, and particularly to the specialty-vehicles concept. Delivery vehicles are often overlooked in analyses of adoption, congestion and regulatory planning. Could non-personal transport be a metric to watch for broad adoption of driverless cars?

The first location is Oklahoma City, Oklahoma, starting next year: 10 autonomous vans will transport orders to customers from local supermarkets including Buy For Less, Uptown Grocery, and Smart Saver.

Like other self-driving delivery trials, the electric vans will carry safety drivers until both Udelv and the regulators sign them off as fit for fully driverless operation.

Self driving groceries

“The vehicles will eventually cover thousands of miles of residential roads in what will be one of the largest autonomous driving deployments in the world,” Udelv said in a release.

Udelv’s van has a top speed of 25 mph and a range of around 60 miles. It delivers Level 4 autonomy, meaning it can operate in most scenarios with little to no human intervention.

The van has 18 compartments for storing customer orders. When it arrives at a delivery address, the customer receives a notification on their smartphone. They then meet the van and access their order by tapping in a code to unlock the compartment holding their items.
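That pickup flow can be sketched in a few lines. All class and field names here are invented for illustration; Udelv’s actual software is not public.

```python
class DeliveryVan:
    """Hypothetical model of a van with code-locked compartments."""

    def __init__(self, compartments=18):
        # Map compartment number -> (order_id, unlock_code), or None if empty.
        self.compartments = {n: None for n in range(1, compartments + 1)}

    def load(self, compartment, order_id, code):
        self.compartments[compartment] = (order_id, code)

    def unlock(self, compartment, code):
        # The right code opens the door and frees the slot; anything else fails.
        slot = self.compartments.get(compartment)
        if slot and slot[1] == code:
            self.compartments[compartment] = None
            return slot[0]
        return None

van = DeliveryVan()
van.load(7, order_id="A-1001", code="4821")
print(van.unlock(7, "4821"))  # -> A-1001 (the matching code releases the order)
```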

Udelv’s deal comes soon after another California-based startup, Nuro, launched a driverless delivery service — on a very small scale — in Arizona with supermarket chain Kroger. Like Udelv, Nuro also has a purpose-built autonomous vehicle, but it’s using self-driving Prius cars until final testing and certification of its own vehicle is done.

It’s not clear how this will benefit people who may find it difficult to move their groceries from the van to their home, such as senior citizens or those with disabilities. Until a box-carrying robot rolls up in the vans, delivery runs will continue to need a human to carry the order into homes.

Autonomous vehicles are developing fast, though it’s likely to be some time before you can purchase your very own fully driverless car. The industry is looking to exploit platforms that allow a more measured rollout of the technology, like self-driving taxi and shuttle services, as well as delivery services using driverless vehicles.

Categories : Uncategorized
Comments (0)

No bigger than a Band-Aid, portable, wearable and possibly powered by a smartphone: engineers at the University of British Columbia have developed a new ultrasound transducer (probe) that could lower the cost of ultrasound scanners to as little as $100.

As we rapidly approach a 1-trillion-plus sensor economy, where we’ll be able to know anything, anywhere, at any time, sensors will augment our five biological senses with unthinkable data-acquisition capabilities. Healthcare is one of the first areas that will benefit. Imagine a future where we no longer need to worry about curing cancer, because our personal tumor-seeking sensor-shell can detect early signs of cancer before cells even become cancerous. Sensors like this take us another step closer.

Current ultrasound scanners use piezoelectric crystals hooked up to a computer to create sonograms. The researchers swapped the piezoelectric crystals for tiny vibrating drums of polymer resin, called polyCMUTs (polymer capacitive micro-machined ultrasound transducers), which are cheaper to manufacture.

Sonograms produced by the UBC device were as sharp as, or even more detailed than, traditional sonograms produced by piezoelectric transducers, said co-author Edmond Cretu, a professor of electrical and computer engineering.

"Since our transducer needs just 10 volts to operate, it can be powered by a smartphone, making it suitable for use in remote or low-power locations," he added. "And unlike rigid ultrasound probes, our transducer has the potential to be built into a flexible material that can be wrapped around the body for easier scanning and more detailed views—without dramatically increasing costs."

UBC researcher Carlos Gerardo shows new ultrasound transducer. Credit: Clare Kiernan, University of British Columbia

Robert Rohling, also a professor of electrical and computer engineering and a co-author, said the next step is to develop a range of prototypes and then test the device in clinical situations.

"You could miniaturize these transducers and use them to look inside your arteries and veins. You could stick them on your chest and do live continuous monitoring of your heart in your daily life. It opens up so many different possibilities," said Rohling.

Categories : Uncategorized
Comments (0)

My rock song is a philanthropy song for Mercy Ships. Enjoy the OFFICIAL video of Hope On My Horizon.

4 out of 5 Stars [Kelly O’Niel] – “Peter Woolston’s music is a similar timbre to Switchfoot … like U2 in its heyday … grand confident vocals … layered guitars … easily attract listeners worldwide”

Categories : Videos
Comments (0)