MyFakeApp

MyFakeApp – FakeApp alternative

DON’T RECOMMEND DOWNLOADING THIS. IT IS NOT FINISHED AND IT WILL NOT BE IN THE NEAR FUTURE. I DON’T HAVE TIME FOR THIS. YOU CAN DO WHATEVER YOU WANT WITH MY CODE. SORRY!

This application can change faces in videos. Please don’t use it to make porn if you do not have permission from the person you want to see doing some nasty stuff.

This article is about my own version of FakeApp from DeepFakes. Neither FakeApp v1 nor v2 worked for me, so I implemented my own user interface.

I read all the comments, so if you want something improved, write it here.

Known issues (needed improvements):

  • configurable temporary directory
  • running “Extract frames” and “Extract faces” on the GPU
  • the CNN takes a long time to recognize a face (also this)
  • build TensorFlow with AVX and SSE
  • save a copy of the merged files to your data folder
  • a way to merge the images without using a video

The backend of my application comes from the FaceSwap repository. It runs on a standalone version of Python (WinPython).

The rest is implemented in C# as a WPF application. The app should be standalone; the only things you need to install are CUDA 9.0 and cuDNN 7.0, and only if you want to use the GPU instead of the CPU. Note that to use the GPU you need a graphics card with CUDA compute capability of at least 3.0.
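
If you are not sure whether TensorFlow can actually see your GPU, a quick sanity check (assuming the bundled Python environment with tensorflow-gpu) is to list the devices it detects:

    from tensorflow.python.client import device_lib

    # Lists every device TensorFlow can use. A working CUDA setup shows a
    # "/device:GPU:0" entry whose description includes the compute capability.
    for device in device_lib.list_local_devices():
        print(device.name, device.physical_device_desc)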

You can download the MyFakeApp installer here (Windows x64 only):

MyFakeApp.msi

You can also modify the application by downloading my repository (a Visual Studio solution):

MyFakeApp repository

Note: the first build will take a long time because it extracts the FaceSwap application from a zip archive.

Screenshots:

175 thoughts on “MyFakeApp – FakeApp alternative”

  1. Hi team! How do I stop and save training so it’ll pick up at the same point later? I lost 6 hours of training when I closed the app. /: I’m now 16 hours into training and see vast improvement, but on CPU I may need to stop the app to work on other programs. How do I save and make sure I don’t lose the training again?

      1. CRIS – DeepFakes needs TensorFlow in order to work. TensorFlow is Google’s library, and its prebuilt binaries were built against CUDA 9.0; they don’t work with CUDA 9.1. So stick with CUDA 9.0 until NVIDIA and Google support the newer version.

      1. Does it take like 6 to 12 hours to train, or more like days or weeks?
        I just want to experiment; if it takes weeks to train, I would grab one of the video cards that has CUDA support.

      2. Do yourself a favor and get a legit GPU! I’m on day 3 of training a 15-second video on CPU. Is the end even near!? I have 36 GB of RAM and a lame 856 MB video card. Pain. Anyone know how to save training?

  2. Hi, I want to ask about the Convert Video section.

    1. Converter
    Which one gives the best resulting video: masked, adjust, or gan? And what is the difference between them?

    2. What are blur size, mask type, seamless mode, and kernel size used for if I select the masked converter?

    3. What are smooth mask and average color if I choose adjust?

    Thank you

  3. Hey there, thanks for all your hard work; thanks to you I was finally able to start making these. I have an AMD GPU so I couldn’t get FakeApp working, and in my opinion, aside from some optimizations you’ve already mentioned for the CPU, I’m pretty happy with this. My question is: how often do you plan to update this, and when is the 2nd revision? A 3-8x speed improvement would be quite substantial, especially with how slow the CPU is. I’d rather you release a working update than a broken one, and I don’t need the specific hour, but do you have a rough time estimate? I’d really appreciate your best guesstimate. Also, is there a way to add AMD GPU support to this? It seems about half the people that want to try this are missing out by having the wrong GPU, and right now the GPU market is ridiculously overpriced. I understand it has more to do with TensorFlow than the app, but could you take some of the scripts from the AMD/Linux combination that has let some GPU users deepfake on FakeApp and implement them in MyFakeApp? Thanks again for your time and dedication and for releasing this free and open source.

      1. I lost 18 hours of training when I closed the app so I could use the CPU’s resources for another program. When I relaunched the app it started training the same faces from the beginning; nothing was saved. How do I close the app during training and relaunch it from where it left off? Where do I access the saved progress and resume? Thank you in advance.

      2. It’s simple: click into the “faces” window (the one with the grid of monsters) and press Enter. It will not close immediately (it needs to finish the last cycle) and it saves everything.
        To restart, use the same parameters; the first thing the program will do is check whether there is an existing model to continue working on.
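
        For reference, this matches how the FaceSwap backend handles weights: on startup it tries to load the existing .h5 files from the model directory, and when you stop it writes them back. A simplified sketch of that logic (hypothetical helper names, not the exact backend code):

        import os

        def try_resume(model, model_dir):
            # Resume a previous run if the weight files already exist; otherwise
            # you get the "Failed loading existing training data." message.
            try:
                model.encoder.load_weights(os.path.join(model_dir, "encoder.h5"))
                model.decoder_A.load_weights(os.path.join(model_dir, "decoder_A.h5"))
                model.decoder_B.load_weights(os.path.join(model_dir, "decoder_B.h5"))
                return True
            except (OSError, IOError):
                return False

        def save_checkpoint(model, model_dir):
            # Called after you press Enter in the preview window: the current
            # iteration finishes, then the weights are written to disk.
            model.encoder.save_weights(os.path.join(model_dir, "encoder.h5"))
            model.decoder_A.save_weights(os.path.join(model_dir, "decoder_A.h5"))
            model.decoder_B.save_weights(os.path.join(model_dir, "decoder_B.h5"))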

      3. Thank you José C V! When I relaunch the program next can I select the training tab and pick up from there? Will it be auto-filled with the previous training parameters?

  4. I reinstalled CUDA 9.0 (from the network installer, not the normal .exe), because otherwise everything fails.
    Then I installed cuDNN 7.0 for my version of CUDA, and everything works fine.
    I put in the paths, extracted some faces from the two videos, and when trying to train… another ERROR.
    This time it seems to be an error from TensorFlow: it says the resource is exhausted, and with the GPU it doesn’t work (with the CPU it works slowly, but it works).
    I tried changing the batch size to literally all the possible options.
    I’m on Windows 7 with a GTX 670. Anyone have a solution?

    Thanks!

    P.S. The GPU usage (as read from Windows) never reaches 4 GB and I don’t know why either.

      1. The software uses a little RAM and almost all the VRAM… 2 GB of VRAM is too little! 4 GB is the minimum.

  5. I get the message that the training files can’t be loaded when I stop and later try to resume training. Is it something I did wrong, or isn’t it possible to restart training?
    Also, when I tried to compile the movie with the trained face I had thus far (0.02), I got a movie where the face of the model was a grey square.

  6. When training via GAN, which loss number do I actually care about? There are four: DA, DB, GA, GB. I would think it would be DA or GA; can anyone confirm? Also, training on a 1080 Ti, how big should my batch size be?

  7. Hi. This seems like a well put together project. I ran Extract frames, then Extract faces, but I’m not sure about INPUT DATA A vs INPUT DATA B – what’s the difference? I only have one set of data now, the extracted faces; what am I missing? I tried just using one of the fields (input data A as the folder with the extracted faces) but it crashed. When I hit debug, it just crashes again. Windows 10 x64, i5 @ 3.4 GHz, 8 GB RAM, GTX 760 with 2 GB VRAM – is there anything I can do to fix this? Thanks for your work on this and your time!

    1. You should have two directories: one with the faces of model A (the face that you want to show in your final clip) and one with the faces of model B (the model that is going to be replaced with model A’s face).
      So Input A points to the first directory and Input B points to the other.

      If you ended up with one directory containing both models’ faces, it won’t work.

      1. “Set A will be the original face that you want to replace. Set B will be the desired face that you will substitute into Set A. Just remember that A = original, B = desired.”
        Extracted from another tutorial, and I think that’s correct; can anyone verify?

      2. I had the same question. In fact, the trained model can be used to replace face A with face B (by default) or to replace face B with face A; in the latter case there is a “Swap model” checkbox to tick in the “Convert video” tab. Hope it helps. By the way, great thanks to the developer.
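
        For background, the reason the A/B direction matters at all is the model layout used by FaceSwap-style trainers: one shared encoder plus one decoder per face set. The toy sketch below only illustrates that idea (tiny dense layers stand in for the real convolutional model):

        import numpy as np
        from keras.layers import Dense, Flatten, Input, Reshape
        from keras.models import Model

        # Toy stand-in for the FaceSwap architecture: one shared encoder and two
        # decoders. The real model uses convolutional layers on 64x64 face crops.
        face = Input(shape=(64, 64, 3))
        latent = Dense(16)(Flatten()(face))
        encoder = Model(face, latent)

        def make_decoder():
            z = Input(shape=(16,))
            out = Reshape((64, 64, 3))(Dense(64 * 64 * 3, activation="sigmoid")(z))
            return Model(z, out)

        decoder_A, decoder_B = make_decoder(), make_decoder()

        # During training, each of these autoencoders is fitted on its own face
        # set (not done here), so the shared encoder learns features of both.
        autoencoder_A = Model(face, decoder_A(encoder(face)))
        autoencoder_B = Model(face, decoder_B(encoder(face)))

        # Converting: encode a face found in the target video, then decode it
        # with the *other* decoder. The "Swap model" checkbox just flips which
        # decoder, and therefore which direction of replacement, is used.
        frame_face = np.random.rand(1, 64, 64, 3)
        swapped_face = decoder_B.predict(encoder.predict(frame_face))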

  8. Bug report: frames extract fine from .webm files, but when attempting to Convert Video the program throws a “cannot create frames” message. It should be relatively simple to fix, I would think; in the meantime, if you want to use .webm files as your samples, convert them to mp4 first (see the sketch below).
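
    Until that is fixed, re-encoding the sample to .mp4 with ffmpeg works around it; a minimal sketch (hypothetical file names, ffmpeg must be on your PATH):

    import subprocess

    # Re-encode a .webm sample to .mp4 before feeding it to "Extract frames".
    subprocess.run(["ffmpeg", "-i", "input.webm",
                    "-c:v", "libx264", "-c:a", "aac", "output.mp4"],
                   check=True)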

  9. Can you create a separate output tab that allows you to face-swap an image gallery? The HOG face detector doesn’t detect the same faces in extract vs convert: when I ran the HOG face detector on only the faces that extract picked up, some of the images were not detected.

  10. Could you allow pull requests on Bitbucket? I have added several new features to my offline version; if you enable pull requests I can open one and you can cherry-pick the features you’d like to incorporate into your codebase.

    1. I tried to activate the 512p face training feature and it seems to work 🙂

      Roderick, what new features did you create?

      1. I have refactored here and there and the app now remembers the values of all the text boxes and other input fields. (I have lost quite a few models through loading them with the incorrect settings)

        I am currently working on saving the output from the scripts to display to the user in case of a crash or error (instead of just crashing to desktop) and I am tracking down a bug that causes the application (not the python processes, just the GUI) to randomly crash.

        I’d also like to make a few minor changes to the UI, grouping elements that belong together and rewriting events and callbacks to use bindings to further clean up the code.

      1. tl;dr:
        Try converting a short video (30 seconds).

        I had the same error. I trained on CPU because of an AMD card. When I tried to create a video I got an error. I then tried with a short sample (about 30 seconds from the original video) and it worked. It looked to me like it could only convert short videos; I think the problem is the specs of my hardware. Did you and ALEXNAUTILUS also train on CPU with an AMD card?

    1. I had the same problem, “Could not create frames!”. The solution for me: when selecting the input video, use the button with the three dots to select the file. It will fill in an output folder automatically (I didn’t change it). I ran the program as normal (which still showed the TensorFlow AVX warning and the can’t-find-alignment.json message) and it worked. Hope this helps.

  11. Hi dev, I’m planning to make a video tutorial for this app.
    Do you accept donations? If yes, please provide the details so I can show them in the video.

  12. Hi friends! Nice work and many thanks! A question… I managed to install everything correctly and tried training on GPU, but I saw that the graphics card reports CUDA capability 2.2. I started the training and everything got stuck! Does this mean that I’m not able to use the app?

  13. How can I tell if the training is making progress? I have a few errors and messages that I don’t understand, and I don’t know if I need to fix something. Here are some of the lines I see:

    – Loading Model from Model_Original plugin…
    C:\Program Files\Radek’s Issues\MyFakeApp\lib\python-CPU\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters

    – Unable to open file (unable to open file: name = ‘C:\MyFakeApp\IMG_0184\model\encoder.h5’, errno = 2, error message = ‘No such file or directory’, flags = 0, o_flags = 0)

    – 2018-03-06 15:56:44.965669: I C:\tf_jenkins\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
    saved model weights loss_A: 0.15507, loss_B: 0.20131

    last line is: [19:02:37] [#00094] loss_A: 0.11923, loss_B: 0.11339

    I’ve verified the file path, and the encoder.h5 file is at that location. I want to know whether it is running fine, whether I need to change some settings, or whether all I need to do is just let it keep running.
    I’m using Windows 10 on an AMD A10-9600P with Radeon R5 graphics, 2.4 GHz, 12 GB RAM.
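
    One rough way to answer the “is it making progress?” question is to copy the console output into a text file and compare the loss values over time; a small sketch (hypothetical log file name):

    import re

    # Compare average losses in the first and second half of the log;
    # if both keep dropping, training is making progress.
    pattern = re.compile(r"loss_A: ([0-9.]+), loss_B: ([0-9.]+)")
    with open("training_log.txt") as log:
        losses = [(float(a), float(b)) for a, b in pattern.findall(log.read())]

    half = len(losses) // 2
    for name, idx in (("loss_A", 0), ("loss_B", 1)):
        early = sum(x[idx] for x in losses[:half]) / half
        late = sum(x[idx] for x in losses[half:]) / (len(losses) - half)
        print("%s: %.5f -> %.5f" % (name, early, late))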

  14. Anyone have an idea as to why I can’t get Convert Video to work? Today I finished training, but when I try to actually merge, the video is just a black screen with the original audio playing. Here’s what I’ve got entered:

    Input video: The video I want to use the face swap on
    Output video: Folder where I want my final product to go
    Model directory: Set to the same place it was in training
    Trainer: Original
    Converter: HOG

    I didn’t touch any other settings because I don’t have any idea what they do (Computer Noob) so it’s all boxes unchecked and Mask Type is defaulted to Rect

    I also tried setting the FPS from 25 to 29.97, since that’s the framerate of my input video, but still a black screen with only audio.

    Sucks because I’ve been running the trainer for 3 days straight and now 0 payoff xD

  15. With the initial setup from the .msi, I’m getting around 1 min 20 sec per batch. The TensorFlow backend warns me that my CPU can handle AVX2 instructions.

    OK, so I’ve found a TensorFlow build with AVX2 here:
    https://github.com/fo40225/tensorflow-windows-wheel (whl files are just zip files)
    => 1.6.0\py36\CPU\avx2 VS2017 15.4 AVX2 Python 3.6

    But after moving to that build (overwriting the files in the MyFakeApp folders), I see no change at all in performance: still roughly 1 min 20 sec per batch. However, I no longer get the warning about CPU capabilities, so the new build is obviously active.

    I didn’t expect a vast improvement, but not even a small change leaves me puzzled.

    Maybe there’s a switch to activate AVX2 when invoking the TensorFlow API? Any idea?
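
    If you want to double-check which binary is actually loaded (the missing warning is a good sign, but this makes it explicit), run this from the bundled interpreter:

    import tensorflow as tf

    # The path should point into MyFakeApp's lib\python-CPU folder, and the
    # version should match the wheel you dropped in (1.6.0 for that AVX2 build).
    print(tf.__file__)
    print(tf.__version__)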

  16. Can someone help me out here? I got a new graphics card (Gigabyte GeForce GTX 1060).
    Now I am not able to train. Please take a look, thanks!
    Using live preview
    Loading Model from Model_Original plugin…
    C:\Program Files\Radek’s Issues\MyFakeApp\lib\python-GPU\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    Failed loading existing training data.
    Unable to open file (unable to open file: name = ‘C:\face 3\face from\face2\model\encoder.h5’, errno = 2, error message = ‘No such file or directory’, flags = 0, o_flags = 0)
    Loading Trainer from Model_Original plugin…
    Starting. Press “Enter” to stop training and save model
    2018-03-18 21:38:50.763592: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
    2018-03-18 21:38:51.102138: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1105] Found device 0 with properties:
    name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7715
    pciBusID: 0000:02:00.0
    totalMemory: 6.00GiB freeMemory: 4.96GiB
    2018-03-18 21:38:51.107348: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:02:00.0, compute capability: 6.1)
    2018-03-18 21:38:54.275106: E C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_dnn.cc:378] Loaded runtime CuDNN library: 7101 (compatibility version 7100) but source was compiled with 7003 (compatibility version 7000). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
    2018-03-18 21:38:54.281964: F C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\kernels\conv_ops.cc:717] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)

    1. New to this too, my 2 cents:

      > Unable to open file (unable to open file: name = ‘C:\face 3\face from\face2\model
      >\encoder.h5’, errno = 2, error message = ‘No such file or directory’, flags = 0, o_flags = 0)

      Avoid spaces in your directory path (e.g. use C:\face3\face_from\face2\model instead).

      > Loaded runtime CuDNN library: 7101 (compatibility version 7100) but source was compiled with 7003 (compatibility version 7000). If using a binary install, upgrade your CuDNN library to match

      I guess you don’t have the cuDNN version installed that matches the TensorFlow build. Try installing cuDNN 7.0.3, or rebuild TensorFlow.

      I’m about to buy a GTX 1060. Please let me know your progress.

  17. Hi, I have a question: is it possible to use Colaboratory to launch the MyFakeApp scripts?
    Colaboratory is a Google research project created to help disseminate machine learning education and research. It’s a Jupyter notebook environment that requires no setup to use and runs entirely in the cloud.
    Colaboratory notebooks are stored in Google Drive and can be shared just as you would with Google Docs or Sheets. Colaboratory is free to use.
    It has Python 2 and Python 3 and GPU support (Colab now supports running TensorFlow computations on a GPU; simply select “GPU” in the Accelerator drop-down in Notebook Settings, either through the Edit menu or the command palette at cmd/ctrl-shift-P).

    https://colab.research.google.com/notebooks/welcome.ipynb

    Give me feedback please
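
    The C# GUI itself is Windows-only, but since the backend comes from the FaceSwap repository, the underlying Python scripts can in principle run in a Colab notebook with the GPU runtime. A very rough sketch of what the cells might look like (paths are made up, and the exact command-line flags depend on the FaceSwap version you clone, so check its README):

    # Rough Colab sketch: enable the GPU runtime first (Notebook Settings).
    !nvidia-smi                               # confirm a GPU is attached
    !git clone https://github.com/deepfakes/faceswap.git
    %cd faceswap
    !pip install -r requirements.txt

    # Illustrative commands; flag names may differ between FaceSwap versions.
    !python faceswap.py extract -i /content/frames_A -o /content/faces_A
    !python faceswap.py extract -i /content/frames_B -o /content/faces_B
    !python faceswap.py train -A /content/faces_A -B /content/faces_B -m /content/model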

  18. I have a 970M in my gaming laptop with 4 GB of VRAM. First it made me install CUDA (I installed 9.1 and had to go back to 9.0 after an error), then it made me install cuDNN, which I did as instructed, and now it just throws out another random error and Python stops working whenever I try to use CUDA.
    Also, can anyone please upgrade this build and release some new features?

    Also, is this supposed to work better at replacing faces in videos, by the way? Like, what exactly is expected to improve?
    Or is there a better build out there?

  19. I got this message: “The cabinet file ‘disk1.cab’ required for this installation is corrupt and cannot be used. This could indicate a network error, an error reading from the CD-ROM, or a problem with this package.”
    How do I fix this?

  20. Well, I had the same problem: after training I could not start converting the video! Because the first cmd window was visible for less than a second I could not see what the problem was, so I started a screen recording and went through it frame by frame until I could see the problem. It was so simple: my video had no audio stream, just a video stream. If you add an audio stream to your video, everything works fine. Maybe this tip helps you too (see the sketch below).
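
    For anyone hitting the same thing, a silent audio track can be added with ffmpeg before converting; a minimal sketch (hypothetical file names, ffmpeg on your PATH):

    import subprocess

    # Mux a silent stereo track into a video that has none, so the Convert
    # Video step finds both a video and an audio stream.
    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-f", "lavfi", "-i", "anullsrc=channel_layout=stereo:sample_rate=44100",
        "-shortest", "-c:v", "copy", "-c:a", "aac", "input_with_audio.mp4",
    ], check=True)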

  21. When trying to train on GPU I get the error

    >ImportError: Could not find ‘nvcuda.dll’. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Typically it is installed in ‘C:\Windows\System32’. If it is not present, ensure that you have a CUDA-capable GPU with the correct driver installed

    I did install everything and my GPU is freshly updated.

    When trying to train on CPU it says

    >Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
    >cannot reshape array of size 589824 into shape (4, 7, 3, 64, 64, 3)

    If you can provide any answers as to why I’m receiving these messages, it would be greatly appreciated.
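
    For the first error: nvcuda.dll ships with the NVIDIA display driver (not with the CUDA toolkit), so it is worth checking whether the bundled Python can load it at all; a quick test:

    import ctypes
    import os

    # nvcuda.dll comes from the display driver and normally lives in
    # C:\Windows\System32; if loading fails, reinstall or update the driver.
    print(os.path.exists(r"C:\Windows\System32\nvcuda.dll"))
    try:
        ctypes.WinDLL("nvcuda.dll")
        print("nvcuda.dll loaded OK")
    except OSError as err:
        print("could not load nvcuda.dll:", err)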

  22. Can you please help me get this working? Last time I had the training process working fine even though I hadn’t installed CUDA 9 or 8 at that point. Now everything else is fine: I installed CUDA 9.0 and the latest graphics driver, and the FakeApp software runs fine, but I can’t get this working with your software. When I click Train, the error says it has stopped working. Thanks in advance.

  23. So has anyone successfully created videos yet? I am training models, but when I convert a video, I get a black screen with audio. If it says it cannot detect a face, does it set that frame to black or does it just leave the original frame without any changes?
    Like I said, I have audio, but the screen is all black. I think it’s a problem with the settings I’m using (I may not fully understand how to set it up).
    I’m using AMD with Radeon graphics. I get the monster faces when it’s training, so I’m pretty sure it’s building the models properly. I just want to get a test video to make sure I have the process down; I don’t really care about the quality currently.

  24. Hello! Would anyone here kindly help me with this error? It would be a great help.

    Model A Directory: G:\Test\faces
    Model B Directory: G:\Test2\faces
    Training data directory: G:\Test2\model
    Loading data, this may take a while…
    Using live preview
    Loading Model from Model_Original plugin…
    C:\Program Files\Radek’s Issues\MyFakeApp\lib\python-CPU\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    Failed loading existing training data.
    Unable to open file (unable to open file: name = ‘G:\Test2\model\encoder.h5’, errno = 2, error message = ‘No such file or directory’, flags = 0, o_flags = 0)
    Loading Trainer from Model_Original plugin…
    Starting. Press “Enter” to stop training and save model

  25. It exits without warning for me when hitting ‘Train’. Event Viewer says the following:

    Application: MyFakeApp.exe
    Framework Version: v4.0.30319
    Description: The process was terminated due to an unhandled exception.
    Exception Info: System.InvalidOperationException

    Is there a particular version of .net framework that is required?

    Many thanks!

  26. C:\Program Files\Radek’s Issues\MyFakeApp\lib\python-GPU\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
    from ._conv import register_converters as _register_converters
    Using TensorFlow backend.
    Failed loading existing training data.
    Unable to open file (unable to open file: name = ‘C:\fakes\data\encoder.h5’, errno = 2, error message = ‘No such file or directory’, flags = 0, o_flags = 0)
    Loading Trainer from Model_Original plugin…
    Starting. Press “Enter” to stop training and save model
    2018-11-04 00:04:01.066726: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
    2018-11-04 00:04:02.429877: E C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_driver.cc:406] failed call to cuInit: CUDA_ERROR_NO_DEVICE
    2018-11-04 00:04:02.628538: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: DESKTOP-QBRMH24
    2018-11-04 00:04:02.631453: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:165] hostname: DESKTOP-QBRMH24

    and starts to train on the processor. Please tell me what to do.

    1. C:\Program Files\Radek’s Issues\MyFakeApp\lib\python-GPU\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
      from ._conv import register_converters as _register_converters
      Using TensorFlow backend.
      Failed loading existing training data.
      Unable to open file (unable to open file: name = ‘C:\fakes\data\encoder.h5’, errno = 2, error message = ‘No such file or directory’, flags = 0, o_flags = 0)
      Loading Trainer from Model_LowMem plugin…
      Starting. Press “Enter” to stop training and save model
      2018-11-04 00:13:24.003716: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
      2018-11-04 00:13:24.733393: E C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_driver.cc:406] failed call to cuInit: CUDA_ERROR_NO_DEVICE
      2018-11-04 00:13:24.768788: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: DESKTOP-QBRMH24
      2018-11-04 00:13:24.771988: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\stream_executor\cuda\cuda_diagnostics.cc:165] hostname: DESKTOP-QBRMH24

  27. Hey guys, I’ll get straight to the point.
    It seems that when you finally convert your video with the models, the built-in Windows media app can’t play it properly, so you need to use another program to view it; in my case I used VLC.
    As for the “Could not create frames” error, it seems to be tied to a specific video file and not to a set of configurations or hardware (I’m not even using a CUDA-compatible GPU, yet I still get that error), therefore I recommend trying to fake a different video altogether, which is what I did. You could also try making a copy of the video with a different name and/or format (however, I think MyFakeApp only supports .mp4 files, but you could try other well-known video formats).

    tl;dr
    Open the video with VLC, and try a different video.
