Runboard.com
Spikosauropod

Parliamentarian

Registered: 06-2007
A Simple Method for Commonsense Reasoning


Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset (Levesque et al., 2011). In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabeled data, to score multiple-choice questions posed by commonsense reasoning tests. On both the Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at the word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task, and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.
https://arxiv.org/abs/1806.02847
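For anyone curious how the scoring step works mechanically: the paper's core idea is to substitute each candidate referent for the pronoun, compute the language-model probability of each resulting full sentence, and pick the higher-scoring one. Here is a minimal sketch of that substitute-and-score procedure using a toy add-one-smoothed bigram model. The paper itself uses large word- and character-level RNN LMs trained on billions of tokens; the tiny corpus, function names, and candidate sentences below are all illustrative stand-ins, not the authors' code.

```python
import math
from collections import defaultdict

def train_bigram(corpus):
    """Count bigrams over whitespace-tokenized, lowercased sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for sent in corpus:
        toks = ["<s>"] + sent.lower().split() + ["</s>"]
        vocab.update(toks)
        for a, b in zip(toks, toks[1:]):
            counts[a][b] += 1
    return counts, vocab

def logprob(sentence, counts, vocab):
    """Add-one-smoothed log-probability of a sentence under the bigram model."""
    toks = ["<s>"] + sentence.lower().split() + ["</s>"]
    V = len(vocab)
    lp = 0.0
    for a, b in zip(toks, toks[1:]):
        total = sum(counts[a].values())
        lp += math.log((counts[a][b] + 1) / (total + V))
    return lp

# Toy corpus standing in for the massive unlabeled data the paper trains on.
corpus = [
    "the trophy was too big",
    "the suitcase was too small",
    "the trophy does not fit in the suitcase",
    "the trophy was big",
]

counts, vocab = train_bigram(corpus)

# Winograd-style schema: "The trophy doesn't fit in the suitcase
# because it is too big." Substitute each candidate for "it" and
# score the resulting full sentence with the LM.
candidates = {
    "trophy":   "the trophy was too big",
    "suitcase": "the suitcase was too big",
}
scores = {c: logprob(s, counts, vocab) for c, s in candidates.items()}
answer = max(scores, key=scores.get)  # higher LM score wins
```

The key design point, which carries over to the real RNN LMs, is that no labeled Winograd data is ever used: the model's preference between substitutions falls out of plain language-model probabilities.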
6/10/2018, 6:55 pm
 
Animecat

Registered user

Registered: 12-2017
Re: A Simple Method for Commonsense Reasoning


Remarkably, on the latter benchmark we achieve 63.7% accuracy, compared to the 52.8% accuracy of the previous state of the art, which uses supervised learning.
6/11/2018, 7:39 pm
 
greendocnowciv



Registered: 11-2017
Re: A Simple Method for Commonsense Reasoning


This may be a true boon to academia.

As this improves and becomes better than the average human reviewer - and keeps improving - it will be curative regarding a current "illness" in academic journal quality and academic peer review standards.

It will see only spotty success at first. Some places with widespread reputations for being really nasty will appear to survive screening. Ah - but then QA checks from "higher AI" will show clear calibration irregularities.

Standards then get firmed up. Penalties are established for not maintaining calibration.

When reliable computer reviews show some university to be uncalibrated, and it keeps failing, then that university will lose its funding and be deprived of its ability to grant accreditation.

It stays "in Coventry" until it shows that its computer reviews are calibrated.

Several universities will hold out. "True believers" have shown throughout history that they can wait a long time. SJW leadership will be pulling out all the stops, trying to get this or that politician to make an exception.

Personal AIs will have long been "bootlegged" for these SJWs. That will have allowed them to survive comfortably in their bubbles as the rest of society is rapidly cleaned of clearly nonsensical verbiage.

But a need for salaries and the Admin desire to be able to say to prospective students "come back! Our accreditation is returning" will keep growing.

Finally those issues will produce a forceful... oh, let's say a "vapor pressure" that will build until the idea of giving up makes so much sense to so many that finally, finally, the horrid SJWs will give up control of academia.
6/12/2018, 1:16 pm
 