"Why Can't I Leave You?" by Ai: Analysis
A woman, married to a lazy, unambitious farmer, stays with him—at first, it seems, from pity. She realizes that given his laziness and the drought he may not survive without her help—and this becomes another reason, perhaps excuse, for staying with him.
Does she mock him? The couple give to each other, not out of some deep love, and even in the midst of conflict, because both the man and the woman know that the other will appreciate the effort to give pleasure.
What limited love they share gives their lives some shade of meaning. In a life already stripped of its possibilities, a little bit of pleasure may be all that a man and a woman can ask of each other.

For a machine to produce work that sounded as if a human had written it, Bhatnagar realized, he had to strike a delicate balance between accuracy and authenticity. On a whim, Bhatnagar looked in the debug logs of Pentametron, a repository of tweets that had been discarded because they were close to iambic pentameter but not perfect.
Text in this trash heap had been stripped of punctuation and capitalization when it was discarded. Bhatnagar built a new program to comb through this corpus and write sonnets with enjambed lines—that is, phrases that flow over naturally from one line to the next: "I wanna be a little kid again. You should unwind a little now and then. Team Stacie looking like a sleepy hoe. Back to the Sunshine State. The devil is a lie. I hate myself a lot sometimes, I mean, possessive, holy shit, this is the second time. I love a windy sunny day. Not coming out until tonight. I miss the happy me. I gotta find a way."

This is more poetry as collage than true composition. My wonderfully creative second-grade teacher, Mrs. Clack, had us spend all kinds of time doing special projects; it was the year I learned what a limerick was. Inspired by Mrs. Clack, I tried my hand at making some little limericks of my own. I loved the wordplay and also the formulaic nature of the composition; perhaps this was an early stirring of the mathematician and computer scientist I would become.
I ran this little poetry program in my head again and again, turning out dozens of nonsense limericks, complete with the requisite little-kid scatology. So, if kids can learn to write poems from scratch, what about machines? I was able to start composing after reading just a handful of examples.
With the latest machine-learning techniques, one roboticist's tool-using robot took a few days to learn a relatively simple task, but it did not require heavy monitoring. She imagines one day having lots of robots out in the world, left to their own devices and learning around the clock. This should be possible — after all, this is how people gain an understanding of the world.
A baby can also recognize new examples from just a few data points: even if it has never seen a giraffe before, it can learn to spot one after seeing it once or twice. Part of the reason this works so quickly is that the baby has already seen many other living things, if not giraffes, and so is familiar with their salient features.
A catch-all term for granting these kinds of abilities to AIs is transfer learning: the idea being to transfer the knowledge gained from previous rounds of training to another task. One way to do this is to reuse all or part of a pre-trained network as the starting point when training for a new task.
For example, reusing parts of a DNN that has already been trained to identify one type of animal — such as those layers that recognize basic body shape — could give a new network the edge when learning to identify a giraffe. An extreme form of transfer learning aims to train a new network by showing it just a handful of examples, and sometimes only one. Known as one-shot or few-shot learning, this relies heavily on pre-trained DNNs.
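To make the layer-reuse idea concrete, here is a minimal sketch in Python; the source names no framework, so PyTorch and a torchvision ResNet-18 pre-trained on ImageNet are assumptions standing in for "a DNN that has already been trained", and the layer choices and hyperparameters are illustrative rather than prescriptive.

import torch
import torch.nn as nn
from torchvision import models

# Load a network whose early layers already encode generic visual features
# (edges, textures, rough body shapes) learned from millions of other images.
base = models.resnet18(weights="IMAGENET1K_V1")  # older torchvision versions use pretrained=True

# Freeze the pre-trained layers so their knowledge is transferred, not overwritten.
for param in base.parameters():
    param.requires_grad = False

# Swap in a new final layer for the new task, e.g. "giraffe" vs "not giraffe".
base.fc = nn.Linear(base.fc.in_features, 2)

optimizer = torch.optim.Adam(base.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    # `loader` is assumed to yield (image_batch, label_batch) pairs of preprocessed images.
    base.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(base(images), labels)
        loss.backward()
        optimizer.step()

Because only the small new layer is trained, a network set up this way can reach useful accuracy with far fewer labelled giraffe images than training from scratch would require.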
Imagine you want to build a facial-recognition system that identifies people in a criminal database. A quick way is to use a DNN that has already seen millions of faces (not necessarily those in the database) so that it has a good idea of salient features, such as the shapes of noses and jaws. Now, when the network looks at just one instance of a new face, it can extract a useful feature set from that image.
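A rough sketch of this idea in Python, including the matching step described next: `embed` is a hypothetical stand-in for whatever pre-trained face-embedding network is used, and the random projection at the bottom exists only so the example runs on its own.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def closest_match(query_image, database, embed):
    # Compare the query's feature vector against the single stored vector
    # per person and return the best-scoring identity.
    query_vec = embed(query_image)
    best_id, best_score = None, float("-inf")
    for person_id, stored_vec in database.items():
        score = cosine_similarity(query_vec, stored_vec)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id, best_score

# Stand-in "embedding network": a fixed random projection of pixel values.
rng = np.random.default_rng(0)
projection = rng.normal(size=(256, 64))
embed = lambda image: image.reshape(-1) @ projection

database = {
    "suspect_a": embed(rng.normal(size=(16, 16))),
    "suspect_b": embed(rng.normal(size=(16, 16))),
}
print(closest_match(rng.normal(size=(16, 16)), database, embed))

Real systems use learned embeddings and a similarity threshold to reject faces that are in neither image set, but the one-stored-image-per-identity comparison works the same way.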
It can then compare that feature set with the feature sets of the single images in the criminal database and find the closest match. Having a pre-trained memory of this kind can help AIs to recognize new examples without needing to see lots of patterns, which could speed up learning with robots. But such DNNs might still be at a loss when confronted with anything too far from their experience, and in all of these approaches the AI is still guided by human-written code in how best to learn from its environment. Chollet thinks that an important next step in AI will be to give DNNs the ability to write their own such algorithms, rather than using code provided by humans.
Supplementing basic pattern-matching with reasoning abilities would make AIs better at dealing with inputs beyond their comfort zone, he argues. Computer scientists have for years studied program synthesis, in which a computer generates code automatically. Combining that field with deep learning could lead to systems with DNNs that are much closer to the abstract mental models that humans use, Chollet thinks.
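For a flavour of what program synthesis looks like at its very simplest, here is a toy enumerative synthesizer in Python; the miniature arithmetic "language", the depth limit and the examples are invented for illustration and are not drawn from Chollet's proposals.

def leaves():
    # The smallest programs: the input variable and a few constants.
    yield "x"
    for c in (0, 1, 2, 3):
        yield ("const", c)

def evaluate(expr, x):
    if expr == "x":
        return x
    if expr[0] == "const":
        return expr[1]
    op, a, b = expr
    va, vb = evaluate(a, x), evaluate(b, x)
    return va + vb if op == "add" else va * vb

def programs(depth):
    # All expressions whose nesting depth is at most `depth`.
    if depth == 0:
        yield from leaves()
        return
    smaller = list(programs(depth - 1))
    yield from smaller
    for op in ("add", "mul"):
        for a in smaller:
            for b in smaller:
                yield (op, a, b)

def synthesize(examples, max_depth=2):
    # Return the first program consistent with every (input, output) example.
    for prog in programs(max_depth):
        if all(evaluate(prog, x) == y for x, y in examples):
            return prog
    return None

# Usage: recover a program equivalent to 2*x + 1 from three input/output pairs.
print(synthesize([(0, 1), (1, 3), (2, 5)]))

Exhaustive search like this collapses on anything non-trivial, which is one reason researchers are interested in letting learned models guide which candidate programs are worth trying.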
In robotics, for instance, computer scientist Kristen Grauman at Facebook AI Research in Menlo Park, California, and the University of Texas at Austin is teaching robots how best to explore new environments for themselves. This can involve picking in which directions to look when presented with new scenes, for instance, and which way to manipulate an object to best understand its shape or purpose. The idea is to get the AI to predict which new viewpoint or angle will give it the most useful new data to learn from.
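The details of such systems are not given here, but the selection rule can be sketched with a toy uncertainty heuristic in Python: the robot scores each candidate viewpoint by how unsure its current model is about what it would see there, and looks at the most uncertain one next. The `predict_proba` callback and all the numbers are hypothetical.

import numpy as np

def entropy(probs, eps=1e-12):
    # Shannon entropy of a predicted class distribution: higher means more uncertain.
    p = np.asarray(probs, dtype=float)
    return float(-(p * np.log(p + eps)).sum())

def choose_next_view(candidate_views, predict_proba):
    # Pick the viewpoint whose prediction is least confident, i.e. the one
    # expected to provide the most informative new data.
    return max(candidate_views, key=lambda view: entropy(predict_proba(view)))

# Toy model that is fairly sure about the front and top views but not the side.
fake_predictions = {
    "front": [0.90, 0.05, 0.05],
    "side":  [0.40, 0.35, 0.25],
    "top":   [0.80, 0.10, 0.10],
}
print(choose_next_view(fake_predictions, lambda v: fake_predictions[v]))  # "side"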
"There is not much theory behind deep learning," says Song. "You just have to try things." For the moment, although scientists recognize the brittleness of DNNs and their reliance on large amounts of data, most say that the technique is here to stay. The realization this decade that neural networks — allied with enormous computing resources — can be trained to recognize patterns so well remains a revelation.
The difference between waste and missed opportunity sometimes is difficult to quantify. Nevertheless, even an approximation of the asymmetric cost is worth calculating. Otherwise, decisions may be made based on AI predictions that are accurate on some measures but inaccurate on outcomes with a disproportionate impact on the business objective.
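One way to make that approximation concrete is to price the two error types separately and compare models on total cost rather than raw accuracy. The sketch below does this in Python; every figure is invented for illustration and is not taken from the telecom example.

COST_OF_WASTE = 20   # e.g. a retention discount given to a customer who would have stayed anyway
COST_OF_MISS = 300   # e.g. margin lost when a likely defector is never contacted

def expected_error_cost(false_positives, false_negatives):
    # Weight each error type by its business cost instead of counting errors equally.
    return false_positives * COST_OF_WASTE + false_negatives * COST_OF_MISS

# Two models with the same 1,000 total errors but very different business impact:
print(expected_error_cost(false_positives=900, false_negatives=100))  # 48,000
print(expected_error_cost(false_positives=100, false_negatives=900))  # 272,000

On a symmetric accuracy metric the two models look identical; weighted by asymmetric costs, one is more than five times as expensive as the other.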
Addressing aggregation. The risk here is that humans are, by and large, reluctant to change. But why should they keep making those decisions at the same pace? With the exact same constraints? As we saw earlier, this sometimes results in failure. The way to solve this problem is by conducting two analyses. In the first, the team should examine how it could eliminate waste and missed opportunities through other marketing actions that might result from the predictions generated. The intervention that the team at the telecom firm considered was a retention discount.
What if the team incorporated other incentives in the decision? Could it predict who would be receptive to those incentives? Could it use AI to tell which incentive would work best with each type of customer? The second type of analysis should quantify the potential gains of making AI predictions more frequently or more granular or both.
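A back-of-the-envelope version of that second analysis might look like the following; every number is hypothetical and would have to come from the firm's own data.

at_risk_per_week = 200          # customers showing churn signals in a given week
value_per_saved_customer = 300  # margin preserved when a likely defector is retained
save_rate_weekly = 0.25         # retention offer arrives within days of the signal
save_rate_quarterly = 0.10      # offer often arrives weeks after the signal
weeks_per_quarter = 13

extra_margin = (at_risk_per_week * weeks_per_quarter * value_per_saved_customer
                * (save_rate_weekly - save_rate_quarterly))
print(extra_margin)  # 117,000 of additional margin per quarter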
While changing the way the decisions were made would obviously incur costs, would the firm find that the benefits outweighed them? Marketing needs AI. But AI needs marketing thinking to realize its full potential.
This requires the marketing and data science teams to have a constant dialogue so that they can understand how to move from a theoretical solution to something that can be implemented. As marketers and data scientists use this framework, they must establish an environment that allows a transparent review of performance and regular iterations on approach—always recognizing that the objective is not perfection but ongoing improvement.
Alignment: Failure to Ask the Right Question
The real concern of the managers at our telecom firm should not have been identifying potential defectors; it should have been figuring out how to use marketing dollars to reduce churn.
Aggregation: Failure to Leverage Granular Predictions
Firms generate torrents of customer and operational data, which standard AI tools can use to make detailed, high-frequency predictions.