
My Artificial Intelligence Journey - Follow Me Into My Singularity

Discussion in 'AI - Artificial Intelligence in Digital Marketing' started by The Doctor, Aug 3, 2017.

  1. The Doctor

    The Doctor Jr. VIP

    Joined:
    Dec 18, 2010
    Messages:
    987
    Likes Received:
    304
    Occupation:
    Computer Scientist, Engineer, Programmer.
    Location:
    ☆☆☆☆☆☆
    Home Page:
    I've been a programmer most of my life. I wish I had gotten into machine learning algorithms at an early age, but for some reason I didn't. I went to college for computer science, but I dropped out because I already knew most of what I was going to be taught. This is not one of those things.

    AI is everywhere, and certainly in marketing, but the average marketer doesn't have access to these algorithms. Not like a programmer does, anyway. Machine learning is hard. It takes a lot of maths and dedicated learning. I have recently started down this path (beyond simply knowing the abstract ideas), so I figured it would be a good idea to chronicle it so that others may benefit, and so that explaining things here will better solidify them in my brain.

    To those ends I have just built a new computer that should be able to handle the algorithms. Here are the parts I've assembled in the machine:


    • Asus X99-E-10G WS Motherboard
    • 32GB Kingston RAM at 3200MHz
    • Intel i7-6850K CPU
    • 512GB Samsung M.2 NVMe NAND SSD
    • 2.5TB across 5 HDDs pooled via mergerfs
    • NVIDIA Titan X Pascal GPU
    • Cosmos 2 Case
    • 850W EVGA Power Supply
    • Corsair h100i v2 Liquid CPU Cooler
    You don't need a crazy computer like this to learn ML. I just want to be able to train large networks quickly. I have set up my normal environment of Linux + KDE5. The machine flies. I have installed the CUDA development library and compiled Tensorflow with GPU support. Initial tests show TF is using the GPU.
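
    If you want to run the same check yourself, one way in TF 1.x is to list the devices Tensorflow can see (device_lib is a semi-internal module, but it works; a working GPU shows up as /gpu:0):

    Code:
    from tensorflow.python.client import device_lib
    
    # Prints every device TF can use: CPU(s) plus any visible GPU(s).
    print(device_lib.list_local_devices())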

    The first long-term goal is to get to a point where I can run predictive analysis on things like ad placements. I think a recurrent neural network would be best for that, since they deal with time-series data. Not 100% sure though. My first step has been to follow a Tensorflow tutorial. I have done that. I don't completely understand what's going on yet, but I know a lot more than I did before. Let's go over some of the basics.

    What is an artificial neural network?
    ANNs are modeled after the biological neural networks in the brain:

    [Image: two biological neurons]
    This is a crude drawing of two neurons. Information flows into the dendrites. Some sort of calculation/function is performed in the nucleus which decides if the neuron is going to fire or not. If it does fire, a signal is sent down the axon and through the synapse to the next neuron.

    So here's how we tend to approximate that in computers:


    [Image: an artificial neuron: inputs x1-x3, weights w1-w3, summation, and activation]
    On the left we have our input data (x1, x2, x3). There can be a lot more than 3; in the first tutorial I did, there were 784 input values. Those input values are all summed together, but first they're weighted (w1, w2, w3): each input value is multiplied by its weight, and then they're all added together (summed). The neuron, based on that input, will either fire or not. In ANNs, a neuron always produces a value whether it fires or not (1 for fire, 0 for not fire). To determine whether or not it fires, the data gets passed through an activation function:

    [Image: the step function]
    The shape above is that of an activation function called a step function (you can see why). Below a certain threshold on x, the output y is 0 (not fire); once x passes the threshold, y becomes 1 (fire). After that, the output of the neuron is fed into the input of another connected neuron:

    [Image: a neuron's output feeding into the next neuron's input]
    That will continue to happen for as many hidden layers as you have in your ANN. The step function was a good example for illustrating neurons, but typically we don't use step functions, because a step function can only produce a 0 or a 1. In ANNs, we want more finely graded outputs than that, such as a decimal value that can be anything from 0 to 1. To that end, we can use a sigmoid function, which looks like this:


    [Image: the sigmoid function]
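
    To make both of these concrete, here's a minimal sketch of a single neuron in plain Python. The inputs, weights, and threshold are made-up values purely for illustration:

    Code:
    import math
    
    def step(x, threshold=0.5):
        # Hard on/off: 1 (fire) once the threshold is passed, else 0.
        return 1 if x > threshold else 0
    
    def sigmoid(x):
        # Smooth version: squashes any number into the range (0, 1).
        return 1 / (1 + math.exp(-x))
    
    def neuron(inputs, weights, activation):
        # Weighted sum: x1*w1 + x2*w2 + x3*w3
        total = sum(x * w for x, w in zip(inputs, weights))
        return activation(total)
    
    print(neuron([1, 0, 1], [0.4, 0.9, 0.3], step))     # 0.7 -> fires (1)
    print(neuron([0, 0, 1], [0.4, 0.9, 0.3], step))     # 0.3 -> doesn't fire (0)
    print(neuron([1, 0, 1], [0.4, 0.9, 0.3], sigmoid))  # ~0.67, a graded output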

    Now let's go back to the previous illustration...

    [Image: the artificial neuron diagram again]


    Your output (y) is simply a function of your x's times your w's (the input data times the weights) passed through your activation function: y = f(x1*w1 + x2*w2 + x3*w3). This will be clarified at a later date.

    [Image: a deep neural network with multiple hidden layers]
    We now illustrate a deep neural network. A deep net is simply a neural network which has two or more hidden layers. In my next post I will get into the first code I wrote. I'm using Python 3.6.2 on Linux. If you want to follow along and do this, you'll need to install:

    • Pyenv
    • Python 3 (Via Pyenv)
    • Tensorflow (With GPU support if you have a GPU)
    • A code editor (Such as VS Code)
    It's much easier to install Tensorflow on Linux than on Windows. If you're running Windows and don't have the time or patience to install Linux on your machine, you can install Linux in VirtualBox. If you're going to use VirtualBox, I recommend a lightweight Ubuntu-based distro such as Lubuntu. If you're going to install Linux on your computer normally (I very much recommend this), I suggest you use KDE Neon.

    You should already have a decent understanding of Python before attempting any of this. If you don't, the official Python tutorial will serve you well. Next time I'm going to attempt to break down training a classifier for the MNIST dataset (Recognizing handwritten digits). All of my posts in this thread will be in blue so you can easily find them. Stay tuned!
     
    • Thanks x 15
    Last edited: Aug 3, 2017
  2. Billy_Batts

    Billy_Batts Elite Member

    Joined:
    Dec 16, 2016
    Messages:
    2,443
    Likes Received:
    1,953
    Gender:
    Male
    Occupation:
    ♫♪.ılılıll|̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅|llılılı.♪♫
    Location:
    ı ̡̡͡|̲̲̲͡͡͡ ̲▫̲͡ ̲̲̲͡͡π̲̲͡͡ ̲̲͡▫̲̲͡͡ ̲|̡̡̡ *
    Home Page:
    There was a certain X-Files episode back in the day that you would have loved.
     
    • Thanks x 3
  3. The Doctor

    The Doctor Jr. VIP

    Joined:
    Dec 18, 2010
    Messages:
    987
    Likes Received:
    304
    Occupation:
    Computer Scientist, Engineer, Programmer.
    Location:
    ☆☆☆☆☆☆
    Home Page:
    The one with the girl that's running from the AI? It's things like that which got me into programming.
     
  4. Dagreyon

    Dagreyon Jr. VIP

    Joined:
    Dec 1, 2011
    Messages:
    1,871
    Likes Received:
    1,448
    Home Page:
    Interesting read. Can't wait to see your progress!
     
    • Thanks x 1
  5. The Doctor

    The Doctor Jr. VIP

    Joined:
    Dec 18, 2010
    Messages:
    987
    Likes Received:
    304
    Occupation:
    Computer Scientist, Engineer, Programmer.
    Location:
    ☆☆☆☆☆☆
    Home Page:
    Thanks. Also, I forgot to mention: if anyone has questions along the way, I'll try to answer them as best I can.
     
    Last edited: Aug 3, 2017
  6. MatthewGraham

    MatthewGraham Jr. VIP

    Joined:
    Oct 6, 2015
    Messages:
    1,056
    Likes Received:
    931
    Gender:
    Male
    Occupation:
    Rolling Face on Keyboard
    Location:
    United States of America
    Home Page:
    Would certainly be interested to see the results of your project. Also, for anyone interested in the concept of neural networks in computer programming, watch the 6 minute video below. It's probably the easiest to follow explanation of the concept that I've come across and can easily be followed even with no computer science background.

     
    • Thanks x 6
  7. The Doctor

    The Doctor Jr. VIP

    Joined:
    Dec 18, 2010
    Messages:
    987
    Likes Received:
    304
    Occupation:
    Computer Scientist, Engineer, Programmer.
    Location:
    ☆☆☆☆☆☆
    Home Page:
    I believe MarI/O combines genetic algorithms with neural networks. Genetic algorithms are actually pretty easy to understand. At some point in this thread I will code and break down MarI/O for everyone. Not ready for that yet though.
     
  8. Billy_Batts

    Billy_Batts Elite Member

    Joined:
    Dec 16, 2016
    Messages:
    2,443
    Likes Received:
    1,953
    Gender:
    Male
    Occupation:
    ♫♪.ılılıll|̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅|llılılı.♪♫
    Location:
    ı ̡̡͡|̲̲̲͡͡͡ ̲▫̲͡ ̲̲̲͡͡π̲̲͡͡ ̲̲͡▫̲̲͡͡ ̲|̡̡̡ *
    Home Page:
    That doesn't remind me of anything.

    But the episode was about a guy in a trailer who found a way to transfer his mind into the internet.
     
  9. aidenhera

    aidenhera Elite Member

    Joined:
    Nov 30, 2016
    Messages:
    2,026
    Likes Received:
    448
    Gender:
    Male
    I think I'm dumb, but I've read the whole thread and still don't get what you plan to do with that sicko machine.
     
  10. Writing Package

    Writing Package Jr. VIP

    Joined:
    May 26, 2017
    Messages:
    149
    Likes Received:
    88
    Gender:
    Male
    Somehow I just got aroused by the fancy graphics, neural networks, and that rig! AWESOME journey! This may be one of the most interesting journeys on BHW yet. I can't wait to see how it unfolds!
     
    • Thanks x 3
  11. The Doctor

    The Doctor Jr. VIP

    Joined:
    Dec 18, 2010
    Messages:
    987
    Likes Received:
    304
    Occupation:
    Computer Scientist, Engineer, Programmer.
    Location:
    ☆☆☆☆☆☆
    Home Page:

    I'm ready to post my first update, which is also like a second tutorial. In this part, I'll be explaining, as best I can (and with the help of text from the tutorial), how to make an image classifier for the MNIST dataset. MNIST is a collection of 60,000 training images and 10,000 test images (70,000 images total) of handwritten digits, 0-9, in different handwriting styles. Each image is 28x28 pixels. The images are black and white, which to my mind means the neural net should have an easier time learning to recognize the shapes, since whether or not a pixel is dark can easily be represented as a 0 or a 1.

    I'm still not 100% knowledgeable on this. After the next tutorial (manually preparing data to teach an ANN), I should know a lot more. The MNIST dataset is good for a first ANN, but all the work of preparing the data is done for you, so don't worry if you don't understand it all 100% from this part. Still, this takes you a long way. Let's begin...

    Before we get to the MNIST part, I'll go over what a basic Tensorflow program is like. Python alone isn't well suited to machine learning because it's a relatively slow scripting language. That means we need a library programmed in something like C++ (like Tensorflow) to do the heavy lifting behind the scenes. We don't have to code any C++; we use Python to tell Tensorflow what to do. First, we define/describe our model in an abstract way. Then, we make that model a reality with a Tensorflow session. The description of the model is known as the computation graph. Let's construct an arbitrary graph. You can either save this code to a .py file or simply run the python command and type the lines one by one into the interpreter:

    Code:
    import tensorflow as tf
    
    
    # creates nodes in a graph
    # "construction phase"
    x1 = tf.constant(5)
    x2 = tf.constant(6)
    We have just defined two Tensorflow constants. Now we can use them. Let's do a multiplication:

    Code:
    result = tf.multiply(x1, x2)  # tf.mul was renamed to tf.multiply in TF 1.0
    
    print(result)
    At this point, we have only defined an abstract tensor. We haven't calculated anything; we've just created operations. Each operation (or op) in our computation graph is a node in the graph. To calculate the result, we need to run the session. Generally, you build the graph first, then launch it:

    Code:
    # defines our session and launches graph
    sess = tf.Session()
    # runs result
    print(sess.run(result))
    We can also assign the output from the session to a variable:

    Code:
    output = sess.run(result)
    print(output)
    When you are finished with a session, you need to close it in order to free up the resources that were used:

    Code:
    sess.close()
    After closing, you can still reference that output variable, but you cannot do something like:

    Code:
    sess.run(result)
    ...which would just return an error. Another option you have is to utilize Python's with statement:

    Code:
    with tf.Session() as sess:
       output = sess.run(result)
       print(output)
    You can also use TensorFlow on multiple devices, and even multiple distributed machines. An example for running some computations on a specific GPU would be something like:

    Code:
    with tf.Session() as sess:
        with tf.device("/gpu:1"):
            matrix1 = tf.constant([[3., 3.]])
            matrix2 = tf.constant([[2.], [2.]])
            product = tf.matmul(matrix1, matrix2)
    That stuff was very straightforward, but we're not here to just do simple math (though the maths of neural networks is actually simple when you deconstruct it).

    Our MNIST Classifier
    First, we take our input data and send it to hidden layer 1. To do that, we weight the input data and pass it to layer 1, where it undergoes the activation function, so each neuron can decide whether or not to fire and output some data to either the output layer or another hidden layer. We will have three hidden layers in this example, making this a deep neural network. We then compare the output we get to the intended output, using a cost function (alternatively called a loss function) to determine how wrong we are. Finally, we use an optimizer function, the Adam optimizer in this case, to minimize the cost (how wrong we are). The way cost is minimized is by tinkering with the weights, with the goal of lowering the cost. How quickly we try to lower the cost is determined by the learning rate. The lower the learning rate, the slower we learn and the more likely we are to get better results. The higher the learning rate, the quicker we learn, giving us faster training times, but the results may suffer. There are diminishing returns here; you cannot just keep lowering the learning rate and always do better, of course.
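
    As a toy illustration of "how wrong we are" (a made-up squared-error example with invented numbers, not the cross-entropy cost we'll actually use below):

    Code:
    intended = [0, 0, 1]      # what the network should have output
    actual = [0.2, 0.1, 0.7]  # what it actually output
    
    # One simple cost: the mean of the squared differences.
    cost = sum((a - i) ** 2 for a, i in zip(actual, intended)) / len(intended)
    print(cost)  # ~0.047 -- lower means less wrong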

    The act of sending the data straight through our network means we're operating a feed forward neural network. The adjusting of weights backwards is our back propagation.

    We do this feeding forward and back propagation however many times we want. One full cycle is called an epoch. We can pick any number of epochs we like, but you would probably want to avoid too many, which causes overfitting.

    After each epoch, we've hopefully further fine-tuned our weights, lowering our cost and improving accuracy. When we've finished all of the epochs, we can test using the testing set.

    For the first bit of code, we need to start up Tensorflow and import the MNIST data:

    Code:
    import tensorflow as tf
    
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)
    Notice the part about one_hot. The term comes from electronics, where exactly one element out of a group is literally "hot" (on) at a time. This is useful for classification tasks like this one (0,1,2,3,4,5,6,7,8,9). So rather than 0 being 0 and 1 being 1, we represent each of 0-9 as:


    Code:
    0 = [1,0,0,0,0,0,0,0,0,0]
    1 = [0,1,0,0,0,0,0,0,0,0]
    2 = [0,0,1,0,0,0,0,0,0,0]
    3 = [0,0,0,1,0,0,0,0,0,0]
    ...
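    If you wanted to build one of these vectors yourself, it's simple (a throwaway sketch; the tutorial's input_data reader does this for us):

    Code:
    def one_hot(digit, n_classes=10):
        # A vector of zeros with a single 1 at the digit's index.
        vec = [0] * n_classes
        vec[digit] = 1
        return vec
    
    print(one_hot(3))  # [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]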
    I have learned that pretty much all data we work with in neural networks is represented in these sorts of vectors. Now we start building our model:

    Code:
    n_nodes_hl1 = 500
    
    n_nodes_hl2 = 500
    n_nodes_hl3 = 500
    n_classes = 10
    batch_size = 100
    We have just specified how many nodes each hidden layer will have, how many classes our dataset has, and what our batch size will be. We could process the entire dataset in one batch, but if you don't have enough RAM, it will crash. Doing the optimization in batches is not a bad thing. Now we will define placeholders for some values in our graph:

    Code:
    x = tf.placeholder('float', [None, 784])
    y = tf.placeholder('float')
    Remember that we're simply building a model that Tensorflow will then manipulate and work with. This will be more obvious after we finish and you look for where to modify the weights. Notice that we have used [None, 784] as the second parameter in the first placeholder. We could have omitted it, but then Tensorflow wouldn't enforce the shape; by being explicit, Tensorflow will throw an error if something out of shape attempts to occupy the variable. The reason we used 784 is that each image is 28x28 pixels, and we flatten each one into a vector of 784 values (28 * 28).
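
    To make the shapes concrete, here's a quick NumPy sketch (the zeros arrays are just stand-ins for real image data):

    Code:
    import numpy as np
    
    image = np.zeros((28, 28))   # stand-in for one 28x28 MNIST image
    flat = image.reshape(784)    # flattened into a 784-value vector
    print(flat.shape)            # (784,)
    
    # [None, 784] means "any number of rows, 784 columns" -- e.g. a
    # batch of 100 flattened images is a (100, 784) matrix.
    batch = np.zeros((100, 784))
    print(batch.shape)           # (100, 784)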

    Time to build the neural network model:

    Code:
    def neural_network_model(data):
        hidden_1_layer = {'weights': tf.Variable(tf.random_normal([784, n_nodes_hl1])),
                          'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    
        hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                          'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    
        hidden_3_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                          'biases': tf.Variable(tf.random_normal([n_nodes_hl3]))}
    
        output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                        'biases': tf.Variable(tf.random_normal([n_classes]))}
    We have just started to define our weights and biases. The bias is a value that is added to our sums before being passed through the activation function, not to be confused with a bias node, which is just a node that is always on. The purpose of the bias here is mainly to handle scenarios where all neurons fed a 0 into the layer: a bias makes it possible for a neuron to still fire out of that layer. A bias is as unique as the weights and will need to be optimized too.
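
    A quick sketch of why that matters (made-up numbers): if every input into a layer is 0, the weighted sum is 0 no matter what the weights are, so the bias is the only thing left to feed the activation function:

    Code:
    inputs = [0, 0, 0]
    weights = [0.4, 0.9, 0.3]
    bias = 0.2
    
    # (input_data * weights) + bias
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    print(total)  # 0.2 -- without the bias this would always be 0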

    All we've done so far is create a starting definition for our weights and biases. These definitions are just random values, for the shape that the layer's matrix should be (this is what tf.random_normal does for us, it outputs random values for the shape we want). Nothing has actually happened yet, and no flow (feed forward) has occurred yet. Let's start the flow:

    Code:
        # (input_data * weights) + biases
    
    
        l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
        l1 = tf.nn.relu(l1)
    
        l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
        l2 = tf.nn.relu(l2)
    
        l3 = tf.add(tf.matmul(l2, hidden_3_layer['weights']), hidden_3_layer['biases'])
        l3 = tf.nn.relu(l3)
    
        output = tf.add(tf.matmul(l3, output_layer['weights']), output_layer['biases'])
    
        return output
    Here, we take values into layer one. What are those values? They are the raw input data multiplied by their unique weights (starting as random, but to be optimized): tf.matmul(data, hidden_1_layer['weights']). We then add the bias with tf.add. We repeat this process for each of the hidden layers, all the way down to our output, where the final values are still the multiplication of the previous layer's output and the weights, plus the output layer's bias values.

    When done, we simply return that output layer. So now we've modeled the network and have almost completed the entire computation graph. Next, we're going to build a function that actually runs and trains the network with TensorFlow. We will set up the training process that will run in the Tensorflow session:

    Code:
    def train_neural_network(x):
        prediction = neural_network_model(x)
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y) )
    In this function, we pass in data and produce a prediction by running that data through the neural_network_model. Next, we create a cost variable. This measures how wrong we are, and it's what we want to minimize by manipulating the weights. The cost function is also known as the loss function. In this neural network, we will optimize our cost with the AdamOptimizer, which is a popular optimizer along with others such as stochastic gradient descent and AdaGrad:

    Code:
        optimizer = tf.train.AdamOptimizer().minimize(cost)
    Within AdamOptimizer(), you can optionally specify the learning_rate as a parameter. The default is 0.001, which is fine for most circumstances.
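    Spelled out explicitly, that default looks like this (functionally identical to the line above):

    Code:
        optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
    Now that we have these things defined, we're going to begin the session: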

    Code:
        # cycles of feed forward + backprop
        hm_epochs = 10
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
    First, we have a quick hm_epochs variable which will determine how many epochs to have (cycles of feed forward and back prop). Next, we're utilizing the with syntax for our session's opening and closing as discussed in the previous tutorial. To begin, we initialize all of our variables. Now come the main steps:

    Code:
            for epoch in range(hm_epochs):
                epoch_loss = 0
                for _ in range(int(mnist.train.num_examples / batch_size)):
                    epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                    _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                    epoch_loss += c
                print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)
    For each epoch, and for each batch in our data, we run our optimizer and cost against the batch. To keep track of the loss/cost at each step, we add up the total cost per epoch. For each epoch, we output the loss, which should be declining each time. This can be useful to track so you can see the diminishing returns over time. The first few epochs should show massive improvements, but after about 10 or 20 you will see very small changes, if any, or may actually get worse.

    Now, outside of the epoch for loop:

    Code:
            correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
    This compares, for each test image, the digit we predicted (the index of the highest output value) to the label's digit, telling us which predictions were exact matches.
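
    If the argmax part is unclear, here's a tiny sketch with made-up values. tf.argmax returns the index of the highest value along an axis, which for one_hot data is the digit itself:

    Code:
    import tensorflow as tf
    
    # Made-up "network output" for one image, and its one_hot label.
    pred = tf.constant([[0.1, 0.0, 0.0, 0.7, 0.0, 0.0, 0.1, 0.0, 0.1, 0.0]])
    label = tf.constant([[0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
    
    with tf.Session() as sess:
        # Both argmax to index 3, so the prediction counts as correct.
        print(sess.run(tf.equal(tf.argmax(pred, 1), tf.argmax(label, 1))))  # [ True]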

    Code:
            accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
            print('Accuracy:', accuracy.eval({x:mnist.test.images, y:mnist.test.labels}))
    Now we have our ending accuracy on the testing set. All that's left is to call the training function:

    Code:
    train_neural_network(x)
    Somewhere between 10 and 20 epochs should give you ~95% accuracy. 95% accuracy sounds great, but it's actually considered quite poor compared to the best methods. I actually think 95% accuracy with this model is nothing short of amazing. Consider that the only information we gave our network was pixel values, that's it. We did not tell it about looking for patterns, or how to tell a 4 from a 9, or a 1 from an 8. The network simply figured it out with an inner model, based purely on pixel values, and achieved 95% accuracy. That's amazing to me, though state of the art is over 99%.

    Full code up to this point: https://hastebin.com/ujuhuyapol.py

    What Did We Learn?
    This is probably the most important part of this whole post, because initially it's often difficult to step through all this as one coherent thought. That's probably because there's a lot that isn't explained here. For instance, we're using TF functions like AdamOptimizer(). How does the Adam optimizer work under the hood? I have no fucking clue. On top of things like that, neural networks themselves are largely a black box; even the best computer scientists can't fully explain why a trained network makes the decisions it does. Anyway, on with it...

    First, we took in some input data. The MNIST data came pre-processed, so we didn't have to worry about formatting it. We told TF to use one_hot so we could represent the classes as such. We then set how many nodes we wanted for each of our hidden layers, how many classes we were working with, and our batch size. Then we set up placeholders for x and y.

    We then went on to build our model. We defined 3 hidden layers and an output layer. We defined biases to be added to our sums before they're passed through our activation functions. We set our initial weights to random values, which told TF what the shape of each layer's matrix should be. We then defined our network's flow.

    Our flow between our layers is defined as input -> L1 -> L2 -> L3 -> output. Our input is multiplied by the weights and goes to our hidden layers. Remember, our initial weights are random. We then sum it all together and add the bias, then pass it on to the next layer until finally, at the output layer, the outputs from the last hidden layer are multiplied by the weights and the bias is added.

    Next, we set up our training process. We make a prediction based on the output of our model, we calculate our cost, then we optimize our weights. We feed the data forward and then backpropagate through the network. We defined how many epochs would happen, which is to say how many cycles of feed forward and backprop would take place. For each epoch, we ran our optimizer and cost function against our batch of data and output the loss (which should decline each epoch). We then calculate our accuracy, call our training function, and we're done.

    The only thing I'm not sure how to do at this point is feed my own handwritten images in to test against the network I trained. In the next tutorial, I'll be formatting my own data, so that should become obvious to me at that point. I hope you all liked this post. If you have any questions or simply want to talk about what's going on here, post a reply.
     
    • Thanks x 7
  12. vin42

    vin42 Junior Member

    Joined:
    May 24, 2016
    Messages:
    159
    Likes Received:
    29
    Location:
    WWW
    I am not a programmer, so most of the post went over my head... but AI definitely is the future. I've read many articles saying AI is already generating content for big media houses using large amounts of structured data. For example:
    http://concured.com/
    https://www.kentico.com
    http://www.narrativa.com/
    https://automatedinsights.com/wordsmith

    But what I want to know is whether content can be generated by feeding video, images, text, and unstructured data to a neural network. I found some sites which can generate content using AI. For example:
    http://articoolo.com/
    http://contentop.com/
    http://www.ai-writer.com
    There's also an AI storyteller which creates stories based on an Image...quite interesting! (https://github.com/ryankiros/neural-storyteller)

    Just to inform you that IBM released an API for its Watson quantum computer (https://developer.ibm.com/dwblog/2017/quantum-computing-api-sdk-david-lubensky/).
    Can a neural network run on a desktop computer?
     
  13. The Doctor

    The Doctor Jr. VIP

    Joined:
    Dec 18, 2010
    Messages:
    987
    Likes Received:
    304
    Occupation:
    Computer Scientist, Engineer, Programmer.
    Location:
    ☆☆☆☆☆☆
    Home Page:
    Sure, generating content based on video can be done and it isn't the most difficult task I can think of (Still not easy). As for IBM, they do a lot of good work and their APIs are great. A correction though: Watson is one of their AIs. The quantum API is separate from their AI (From the perspective of anyone using the API AFAIK) though the Watson team works on it.

    Yes, you can run neural networks on your desktop. I'm running them on my desktop. I described my computer's parts in the first post but you don't need the crazy hardware I bought. You could run them on any old computer. It just means it will take longer to train your networks. There is of course a limit to the complexity of the networks you can run on a really shitty computer but in that case you could just run them on Amazon EC2 instances for super cheap. Most people should be able to follow along with what I'm doing here on their computer.
     
  14. tux

    tux Jr. VIP

    Joined:
    Jul 11, 2016
    Messages:
    1,301
    Likes Received:
    717
    Gender:
    Male
    Your bed awaits, my friend, go to sleep :p Interesting read btw, I'll be following.
     
    • Thanks x 1
  15. The Doctor

    The Doctor Jr. VIP

    Joined:
    Dec 18, 2010
    Messages:
    987
    Likes Received:
    304
    Occupation:
    Computer Scientist, Engineer, Programmer.
    Location:
    ☆☆☆☆☆☆
    Home Page:
    Thanks. After one more episode of Fear the Walking Dead, I promise.
     
  16. tux

    tux Jr. VIP

    Joined:
    Jul 11, 2016
    Messages:
    1,301
    Likes Received:
    717
    Gender:
    Male
    That shit sucks. I mean, come on, zombies swim in that TV show, are you for real? Plus, you said there's an IG bot you're working on, but all I see is machine learning and AI.
     
  17. lelou00

    lelou00 Power Member

    Joined:
    Feb 23, 2012
    Messages:
    546
    Likes Received:
    160
    Occupation:
    Multipreneur
    Location:
    Isla Paloma
    While I don't really get into programming (just minor coding experience with Arduino), I am really interested in AI and ML. I would love to learn more about this and hope I can learn a lot from you here.

    Do you think it's possible to create a personal assistant that conducts research for you? For example, if I want to do SEO, I let my AI research everything about SEO, and later on I just consult it on what to do, which things are good, which are bad, etc.?
     
  18. The Doctor

    The Doctor Jr. VIP

    Joined:
    Dec 18, 2010
    Messages:
    987
    Likes Received:
    304
    Occupation:
    Computer Scientist, Engineer, Programmer.
    Location:
    ☆☆☆☆☆☆
    Home Page:
    I like the original far better but I've seen them all. The bot I'm closest to having ready is my SEO bot but most of my time is spent on Infiniproxy. It's a lot of work. The AI stuff is what I do in between work. It's simply unacceptable to not have mastered it. I see a near future where the only jobs are AI jobs.

    A personal assistant like that is possible but would be a ton of work. I saw a similar Kickstarter but it didn't get funded (I don't think it was all that great though). I guess it depends on the quality of research you expect to get out of it. One of the holy grails of AI is to have an AI scientist. A basic research tool? Yes, could be done now. SEO AI? Sure. It could even automatically provision SEO resources for you and work to increase rank on its own. A scientist in its own right? Not quite there yet but I pretty much know we'll get there soon.
     
    • Thanks x 2
  19. Kabone

    Kabone Regular Member

    Joined:
    Apr 25, 2011
    Messages:
    200
    Likes Received:
    59
    Gender:
    Male
    Occupation:
    Bot Development
    Location:
    Boise, Idaho
    Awesome stuff. Will be following this closely...
     
    • Thanks x 1
  20. Writing Package

    Writing Package Jr. VIP

    Joined:
    May 26, 2017
    Messages:
    149
    Likes Received:
    88
    Gender:
    Male
    I just got an erection. This is so cool!
     
    • Thanks x 3