
Thoughts on the technology entry barrier in 2018, illustrated by a simple mobile application (and more)


I was once in the 5th grade, and now for some reason it seems to me that there is a huge gap in access to technology between me and the kids who are in the 5th grade today. And since technology develops faster and faster, I wonder what things will be like when today's 5th graders reach my age.

In this short article, using a simple iPhone application as an example, I want to show just how accessible technology has become.

A bit of a digression


About 20 years have passed since my eyes widened to the limit at the sight of an animated paper clip on a computer screen offering to help me. Back then the clip seemed omnipotent; to me there was little difference between what it could do and magic.


Just 13 years ago, I held a PDA (a pocket personal computer) in my hands for the first time. A computer. Pocket-sized. With Windows. It runs not from a 220 V outlet but from a battery, and it has Internet access. Without wires or "Web Plas" access cards. Internet access. Access to that very Internet, five minutes of which our computer club group once received as a gift for great achievements. We were allowed to visit one site (over a 33 kbps modem). The whole group spent a long time discussing which site it would be. Voting, debates.



Fast forward to May 2018


California hosts the Google I/O 2018 conference, and among dozens of other announcements, ML Kit is added to the Firebase service: a tool that can recognize the content of pictures, faces, and text, and much more, even run TensorFlow models, either on the smartphone or in the cloud. Big deal, you might say. As if we didn't already know about machine learning and neural networks.

Okay, let's build an app that recognizes text.
Open Xcode and create a new project. Add a Podfile with the pods below, then run pod install and open the generated .xcworkspace:

target 'TextRecognition' do  # assumption: substitute your own app target's name
  use_frameworks!
  pod 'Firebase/Core'
  pod 'Firebase/MLVision'
  pod 'Firebase/MLVisionTextModel'
end
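
By the way, one step the walkthrough skips: the Firebase SDK must be initialized at launch, and the project needs the GoogleService-Info.plist file downloaded from the Firebase console. A minimal sketch of the standard setup (a plain Swift 4-era AppDelegate, nothing app-specific):

import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        // Reads GoogleService-Info.plist and initializes Firebase,
        // which ML Kit's detectors rely on.
        FirebaseApp.configure()
        return true
    }
}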

The interface is simple:




To give the app access to the iPhone's camera, add a key to Info.plist (the string is the explanation shown to the user in the system permission prompt; the original wording was lost, so any short description will do):

 <key>NSCameraUsageDescription</key>
 <string>The app uses the camera to photograph text for recognition</string>
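
UIImagePickerController will trigger the system permission prompt on its own, but if you want to handle a denied permission gracefully, you can check it explicitly before opening the camera. A small sketch of my own (not from the original article; the helper name ensureCameraAccess is made up), using the standard AVCaptureDevice API:

import AVFoundation

// Calls back with true when the camera may be used, asking the user first if needed.
func ensureCameraAccess(_ completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default: // .denied or .restricted
        completion(false)
    }
}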

Create a controller, connect the UIImageView to it, and wire up our two UIButtons.
And if we do manage to recognize something, the smartphone will also say it out loud.

import UIKit
import Firebase
import AVKit
import AVFoundation

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // ML Kit entry point and the on-device text detector
    lazy var vision = Vision.vision()
    var textDetector: VisionTextDetector?

    // Synthesizer that will read the recognized text aloud
    let synthesizer = AVSpeechSynthesizer()

    override func viewDidLoad() {
        super.viewDidLoad()
        textDetector = vision.textDetector()
    }

    // Shows the photo we just took
    @IBOutlet weak var imagePicked: UIImageView!

    // First button: open the camera
    @IBAction func openCamera(_ sender: Any) {
        if UIImagePickerController.isSourceTypeAvailable(.camera) {
            let imagePicker = UIImagePickerController()
            imagePicker.delegate = self
            imagePicker.sourceType = .camera
            imagePicker.allowsEditing = false
            self.present(imagePicker, animated: true, completion: nil)
        }
    }

    // Second button: recognize the text in the photo and speak it
    @IBAction func getText(_ sender: Any) {
        if let image = imagePicked.image {
            let visionImage = VisionImage(image: image)
            textDetector?.detect(in: visionImage, completion: { [weak self] (visionTextItems, error) in
                if let error = error {
                    print("Text recognition error: \(error)")
                    return
                }
                if let foundItems = visionTextItems, !foundItems.isEmpty {
                    for item in foundItems {
                        print("Recognized text: \(item.text)")
                        let utterance = AVSpeechUtterance(string: item.text)
                        utterance.rate = 0.4
                        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
                        self?.synthesizer.speak(utterance)
                    }
                } else {
                    print("no text found")
                }
            })
        }
    }

    // Put the captured photo into the UIImageView and close the camera
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
        let image = info[UIImagePickerControllerOriginalImage] as! UIImage
        imagePicked.image = image
        dismiss(animated: true, completion: nil)
    }
}
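
A small variation, not part of the original code: speaking each recognized item separately makes the synthesizer pause between fragments. This hedged sketch (it assumes the same synthesizer property and that the detector's callback delivers [VisionText] items, as above) joins everything into a single utterance and interrupts any speech still in progress:

// Speak all recognized items as one utterance (hypothetical helper for the same controller).
func speakAll(_ items: [VisionText]) {
    let fullText = items.map { $0.text }.joined(separator: " ")
    guard !fullText.isEmpty else { return }
    // Cut off anything still being spoken before starting over.
    if synthesizer.isSpeaking {
        synthesizer.stopSpeaking(at: .immediate)
    }
    let utterance = AVSpeechUtterance(string: fullText)
    utterance.rate = 0.4
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}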

We launch the app, write something with a pen on a piece of paper (the text itself is completely unremarkable), take a photo



and look at the console:



Oh yes: all of this works in airplane mode, that is, without any Internet access. Both the handwriting recognition and the speech synthesis.


Tools, methods, and tasks that seemed very complicated only yesterday are available today, quickly and for free, right out of the box. And what will things be like in another 20 years, or 50?

Source: https://habr.com/ru/post/358394/

