KrISS feed 8.7 - A simple and smart (or stupid) feed reader. By Tontof
  • Sunday 20 September 2015 - 14:01

    When you issue git tag, it shows all tags of a repository sorted alphabetically. Often it makes much more sense to see tags sorted by tagging date, but unfortunately there is currently no git subcommand that accomplishes this easily. So we are going to write our own:

    Article series
    Git like a pro

    1. Git like a pro
    2. Sort git tags by date
    3. Rewrite author history

    1. The Basics

    The simplest way to sort tags by date is shown below:

    git for-each-ref --sort=taggerdate --format '%(tag)'
    ...
    v1.51
    v1.52
    v1.52.1
    v1.53
    v1.54
    

    But you could also display some more information instead of just the tag itself.

    2. Verbose Output

    As you have probably guessed already, the --format parameter is responsible for extending the information. For a full list of all possible values, have a look at man git-for-each-ref.

    For now, we are using the tag name, the tagging date, the name of the tagger and the tag message:

    git for-each-ref --sort=taggerdate --format '%(tag) %(taggerdate:raw) %(taggername) %(subject)' refs/tags
    ...
    v1.51 1438592208 +0200 FirstName LastName Release v1.51
    v1.52 1439215948 +0200 Jane Doe Release v1.52
    v1.52.1 1439907306 +0200 John Doe Release v1.52.1
    v1.53 1440673885 +0200 Cytopia Release v1.53
    v1.54 1442223780 +0200 Cytopia Release v1.54
    

    We now have some more information, but it is not very clearly arranged.

    3. Prettify

    Before we can start to apply our command line-fu on the above output, we will set a clear goal:

    • Make each column aligned vertically
    • Show a human readable date

    For the vertical alignment there are also a few problems that might arise:

    • The tag name can have a variable length
    • The tagger name can consist of multiple words separated by whitespace

    Another thing is that the output cannot simply be split on whitespace, as the tagger name itself may contain spaces.

    So the first step is to separate the fields with a delimiter other than space, one that is reasonably unique:

    git for-each-ref --sort=taggerdate --format '%(tag)_,,,_%(taggerdate:raw)_,,,_%(taggername)_,,,_%(subject)' refs/tags
    ...
    v1.51_,,,_1438592208 +0200_,,,_FirstName LastName_,,,_Release v1.51
    v1.52_,,,_1439215948 +0200_,,,_Jane Doe Plocke_,,,_Release v1.52
    v1.52.1_,,,_1439907306 +0200_,,,_John Doe_,,,_Hotfix Release v1.52.1
    v1.53_,,,_1440673885 +0200_,,,_Cytopia_,,,_Release v1.53
    v1.54_,,,_1442223780 +0200_,,,_Cytopia_,,,_Release v1.54
    

    Looks more machine readable. And now we can apply some awk magic on it:

    git for-each-ref --sort=taggerdate --format '%(tag)_,,,_%(taggerdate:raw)_,,,_%(taggername)_,,,_%(subject)' refs/tags \
      | awk 'BEGIN { FS = "_,,,_"  } ; { printf "%-20s %-18s %-25s %s\n", $2, $1, $4, $3  }'
    ...
    1438592208 +0200     v1.51              Release v1.51             FirstName LastName
    1439215948 +0200     v1.52              Release v1.52             Jane Doe
    1439907306 +0200     v1.52.1            Hotfix Release v1.52.1    John Doe
    1440673885 +0200     v1.53              Release v1.53             Cytopia
    1442223780 +0200     v1.54              Release v1.54             Cytopia
    

    So what does it do?

    With FS we set awk's field separator to the one we added to git's --format string.

    'BEGIN { FS = "_,,,_"  }'...
    

    Now we re-order and print the columns. The printf command applies proper spacing between the columns. Feel free to adjust it as desired.

    ... '{ printf "%-20s %-18s %-25s %s\n", $2, $1, $4, $3  }'
    

    The last thing that is missing is to get a nice readable date.

    4. Format Date

    awk has a built-in function to convert a timestamp into a readable date: strftime.

    git for-each-ref --sort=taggerdate --format '%(tag)_,,,_%(taggerdate:raw)_,,,_%(taggername)_,,,_%(subject)' refs/tags \
      | awk 'BEGIN { FS = "_,,,_"  } ; { t=strftime("%Y-%m-%d  %H:%M",$2); printf "%-20s %-18s %-25s %s\n", t, $1, $4, $3  }'
    ...
    2015-08-03  10:56     v1.51              Release v1.51             FirstName LastName
    2015-08-10  16:12     v1.52              Release v1.52             Jane Doe
    2015-08-18  16:15     v1.52.1            Hotfix Release v1.52.1    John Doe
    2015-08-27  13:11     v1.53              Release v1.53             Cytopia
    2015-09-14  11:43     v1.54              Release v1.54             Cytopia
    

    So what does it do?

    We first set a new variable t and assign it the formatted date derived from column 2.

    ... '{ t=strftime("%Y-%m-%d  %H:%M",$2);'...
    

    Note that in the printf part we use the variable t again, so the formatted date shows up as our first column.

    ... 'printf "%-20s %-18s %-25s %s\n", t, $1, $4, $3 }'
    

    The output looks much better now. The only problem I see is that I don’t want to enter such a long command every time I want a quick look at the tags of a repository. So this has to go into the global gitconfig.

    5. Gitconfig Alias

    Inside your ~/.gitconfig create a section [alias] and paste the following command.
    Note that there is some escaping inside awk.

    [alias]
        # Show tags sorted by date
        tags = !"git for-each-ref \
            --sort=taggerdate \
            --format '%(tag)_,,,_%(taggerdate:raw)_,,,_%(taggername)_,,,_%(subject)' refs/tags \
            | awk 'BEGIN { FS = \"_,,,_\"  } ; { t=strftime(\"%Y-%m-%d  %H:%M\",$2); printf \"%-20s %-18s %-25s %s\\n\", t, $1, $4, $3  }'"
    

    From now on we can simply issue git tags inside a repository:

    git tags
    ...
    2015-08-03  10:56     v1.51              Release v1.51             FirstName LastName
    2015-08-10  16:12     v1.52              Release v1.52             Jane Doe
    2015-08-18  16:15     v1.52.1            Hotfix Release v1.52.1    John Doe
    2015-08-27  13:11     v1.53              Release v1.53             Cytopia
    2015-09-14  11:43     v1.54              Release v1.54             Cytopia
    

    6. Summary

    So here is the final command:

    git for-each-ref --sort=taggerdate --format '%(tag)_,,,_%(taggerdate:raw)_,,,_%(taggername)_,,,_%(subject)' refs/tags \
      | awk 'BEGIN { FS = "_,,,_"  } ; { t=strftime("%Y-%m-%d  %H:%M",$2); printf "%-20s %-18s %-25s %s\n", t, $1, $4, $3  }'
    

    And also what you need to add to your gitconfig:

    tags = !"git for-each-ref \
        --sort=taggerdate \
        --format '%(tag)_,,,_%(taggerdate:raw)_,,,_%(taggername)_,,,_%(subject)' refs/tags \
        | awk 'BEGIN { FS = \"_,,,_\"  } ; { t=strftime(\"%Y-%m-%d  %H:%M\",$2); printf \"%-20s %-18s %-25s %s\\n\", t, $1, $4, $3  }'"
    

    If anybody knows a simpler or more elegant way to achieve this, let me know. I am always looking for better git aliases.

    The post Git like a pro: sort git tags by date appeared first on Everything CLI.

  • Wednesday 23 September 2015 - 01:18

    How to configure ranger image preview on OSX with iTerm2? Ranger’s image preview in iTerm2 does not work out of the box; you will need some additional scripts and config settings to get it working. Here is how it is done, step by step.

    TL;DR

    Install iTerm >= 2.9 and…

    # Install homebrew
    command -v brew > /dev/null 2>&1 || ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
    
    # Install ranger
    brew install ranger
    
    # Add imgcat to ~/bin
    test -d $HOME/bin || mkdir $HOME/bin
    wget -O ~/bin/imgcat https://raw.githubusercontent.com/gnachman/iTerm2/master/tests/imgcat
    
    # Ranger init
    ranger --copy-config=all
    
    # Ranger config
    sed -e "s/set\spreview_images\s.*$/set preview_images true/" ~/.config/ranger/rc.conf > ~/.tmp.tmp \
        && mv ~/.tmp.tmp ~/.config/ranger/rc.conf && rm ~/.tmp.tmp
    sed -e "s/set\spreview_images_method.*$/set preview_images_method iterm3/" ~/.config/ranger/rc.conf > ~/.tmp.tmp \
        && mv ~/.tmp.tmp ~/.config/ranger/rc.conf && rm ~/.tmp.tmp
    

    Outline

    1. Requirements
    2. Installation
      1. iTerm
      2. Ranger
      3. imgcat
      4. Dependencies
    3. Configuration
    4. Known Problems
      1. Image display
      2. tmux
    5. Ranger in action
    6. Further readings

    1. Requirements

    Make sure you meet the following requirements.

    As of now, the stable release of iTerm2 does not support image preview so you will have to download the test version here.

    Also note that to get the full range of previews in ranger, you will need the following optional tools:

    • file for determining file types
    • The python module chardet, in case of encoding detection problems
    • sudo to use the “run as root”-feature
    • img2txt (from caca-utils) for previewing images in ASCII-art
    • highlight for syntax highlighting of code
    • atool for previews of archives
    • lynx, w3m or elinks for previews of html pages
    • pdftotext for pdf previews
    • transmission-show for viewing bit-torrent information
    • mediainfo or exiftool for viewing information about media files

    2. Installation

    2.1 iTerm

    • Download iTerm2 test version here.
    • Install

    2.2 Ranger

    On OSX most cli applications can be installed using homebrew. So we first have to install homebrew and then simply use it to install ranger itself.

    # Install homebrew
    ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
    
    # Install ranger
    brew install ranger
    

    2.3 imgcat

    Download the latest copy of imgcat from github and put it into a bin directory that is inside your $PATH variable. (What is PATH)

    If you have a ~/bin directory, just put it there and it will work for your user only, otherwise put it into /usr/local/bin for all users.

    # Download it to ~/bin
    wget -O ~/bin/imgcat https://raw.githubusercontent.com/gnachman/iTerm2/master/tests/imgcat
    
    # Download it to /usr/local/bin (requires sudo)
    sudo wget -O /usr/local/bin/imgcat https://raw.githubusercontent.com/gnachman/iTerm2/master/tests/imgcat
    
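    One small addition that is easy to miss: wget does not preserve the executable bit, so you will most likely have to make imgcat executable yourself. Also make sure ~/bin is actually part of your $PATH (the export line below assumes you use bash with ~/.bash_profile):

    # Make imgcat executable
    chmod +x ~/bin/imgcat

    # Add ~/bin to your PATH if it is not already in there (bash assumed)
    echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bash_profile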

    2.4 Dependencies

    For the full range of ranger previews you can optionally install the dependencies listed above via homebrew:

    brew install libcaca highlight atool lynx w3m elinks poppler transmission mediainfo exiftool
    

    3. Configuration

    If you are using ranger for the first time, generate the ranger config files.

    ranger --copy-config=all
    

    Now you can go to ~/.config/ranger and see what files have been generated:

    cd ~/.config/ranger
    ls -la
    
    total 88
    drwxr-xr-x 7 cytopia staff   238 Sep 23 01:02 ./
    drwxr-xr-x 4 cytopia staff   136 Sep 23 01:02 ../
    -rw-r--r-- 1 cytopia staff  2624 Sep 23 01:02 commands.py
    -rw-r--r-- 1 cytopia staff 46073 Sep 23 01:02 commands_full.py
    -rw-r--r-- 1 cytopia staff 18399 Sep 23 01:02 rc.conf
    -rw-r--r-- 1 cytopia staff  9346 Sep 23 01:02 rifle.conf
    -rwxr-xr-x 1 cytopia staff  3822 Sep 23 01:02 scope.sh
    

    Let’s go over them quickly:

    • commands.py: Commands which are launched with :
    • commands_full.py: Full set of commands
    • rc.conf: Configuration and keybindings
    • rifle.conf: File associations (which program to use for opening files)
    • scope.sh: Responsible for various file previews

    Currently the only file that matters to us is rc.conf. Open it in your favorite editor and change the following two lines to look like this:

    # Use one of the supported image preview protocols
    set preview_images true
    
    # Set the preview image method. Supported methods:
    #
    # * w3m (default):
    #   Preview images in full color with the external command "w3mimgdisplay"?
    #   This requires the console web browser "w3m" and a supported terminal.
    #   It has been successfully tested with "xterm" and "urxvt" without tmux.
    #
    # * iterm2:
    #   Preview images in full color using iTerm2 image previews
    #   (http://iterm2.com/images.html). This requires using iTerm2 compiled
    #   with image preview support.
    set preview_images_method iterm2
    

    Now you are all set and can enjoy ranger image previews in iTerm2.

    4. Known Problems

    4.1 Image display

    If images do not show up correctly, you will need to alter rc.conf and set draw_borders to true:

    vim ~/.config/ranger/rc.conf
    
    # Draw borders around columns?
    set draw_borders true
    

    4.2 tmux

    The problem I have encountered and haven’t solved so far is that this does not work inside a tmux session. So if anybody knows how to make ranger preview images with iTerm2 inside tmux, please let me know.

    5. Ranger in action

    Here are a few slides to see ranger in action:


    6. Further readings

    If you want to know how to integrate ranger into vim as a file explorer with all its cool features, read: Use ranger as a file explorer in vim.


    The post Ranger image preview on OSX with iTerm2 appeared first on Everything CLI.

  • Saturday 10 October 2015 - 22:18

    In this little post I am going to show you why you sometimes need to rewrite the author or committer history, how you do it and where it will not work as expected.

    Article series
    Git like a pro

    1. Git like a pro
    2. Sort git tags by date
    3. Rewrite author history

    TL;DR

    The complete source including command-generation can be found at github:
    cytopia/git-rewrite-author

    Why would I rewrite the git author history?

    First things first, why would you ever want to rewrite the git author history? Think of the following scenario:

    You are going to do a very urgent fix directly in the git repository deployed on the server. It seems to work fine, so you commit it to the local repository and push it to remote. But wait… You totally forgot that you were acting as root, and when you issue git log it shows the following:

    commit a0925c315107fcdcfb7a3b2dcd435995c8216ad2
    Author: root <root@localhost>
    Date:   Sat Oct 10 21:39:43 2015 +0200
    
        fixed xss on index.php
    

    Damn, that was a mistake. You should have taken the time to do it on your local machine, push to remote and check it out on the server, as your seniors always told you. Don’t worry, this can be fixed.

    Rewriting the author history

    This time you better do it locally. Make sure your local repository is up to date with the remote and quickly list all git authors:

    $ git log --pretty=full | grep  -E '(Author|Commit): (.*)$' | sed 's/Author: //g' | sed 's/Commit: //g' | sort -u
    
    Junior Dev <junior.dev@yourcompany.tld>
    Senior Dev <senior.dev@yourcompany.tld>
    root <root@localhost>
    

    OK, you need to get rid of root and rewrite it to your git account Junior Dev. The command for this case will look like this:

    $ git filter-branch --env-filter '
        if [ "$GIT_COMMITTER_EMAIL" = "root@localhost" ]; then
            export GIT_COMMITTER_NAME="Junior Dev"
            export GIT_COMMITTER_EMAIL="junior.dev@yourcompany.tld"
        fi
        if [ "$GIT_AUTHOR_EMAIL" = "root@localhost" ]; then
            export GIT_AUTHOR_NAME="Junior Dev"
            export GIT_AUTHOR_EMAIL="junior.dev@yourcompany.tld"
        fi
    ' --tag-name-filter cat -f -- --all
    

    OK, you have done it, let’s double check by listing the authors again:

    $ git log --pretty=full | grep  -E '(Author|Commit): (.*)$' | sed 's/Author: //g' | sed 's/Commit: //g' | sort -u
    
    Junior Dev <junior.dev@yourcompany.tld>
    Senior Dev <senior.dev@yourcompany.tld>
    

    OK, it’s gone. Now push it to the remote.

    Rewriting the remote author history

    Once you have rewritten the local git author history, you will only have to make a forced push of all refs including tags:

    $ git push --force --tags origin 'refs/heads/*'
    

    That’s it.

    Where does it come short?

    Be aware that if any of the commits or tags of an author you want to change are signed, it will mess up the commit messages in the history. It will take the gpg signature of the commit and prepend it to the commit message, which will look something like this:

    commit 4ac785bf03d4b4f814fa9d139db9b9c7b53df733
    Author: cytopia <cytopia@everythingcli.org>
    Date:   Fri Jul 10 16:03:09 2015 +0200
    
         iQIcBAABCgAGBQJVn9CdAAoJEKrf9Tsovxef+7AP/0rHHruS1o1P28iHPJS7JEuw
         FKg1a2MG8DeqwfA93O+chkyl2KfvPEG6yrf369XLAdgR4WkmpCDvK9qYNBxde8NZ
         X5S7rGVMKvgdH031o7hYExhMKo1QJrXWzCa7KoEas0/SIAKaqWHojBEpQJO9nBw/
         60XMpwxo7MY+Y4a/W603yL3ZgPYgHFHun3pb5sxlFyL4uhdPJXPhPUhAkXdlI4TE
         vGt/DwJJ2puDfaQsiOe3Iz6VEYk5s5L22OcYZjBnkgeM14Db9FHAK/KGVgJg0Md0
         64HRSyNDCO92CEwCMdv7cqP8cnm2zsk0TvEYpvp7zz1rWkvXK2VAiMfSy8RrUTP8
         rE6qDwL7mF27aFAPuVohtddwYMKdcDijffQ/hT7tWeW8zKiI8NcbSENqJ6/SQbqP
         mn2WC3vyl2X33WCF3j01BPzq+02B2gMxTr0vnnxewpVDgWKPYGxhxRHXcapo9dva
         t8nC6jOXktzHSffWGYFVX/nK1iVoeIrGEPi4n5w3tMX5+HgZUNQCltXRTuUCGCot
         LYGMPCRRbSJ8i5KarRHxvxQp5YisQe/pED3NZMg0gEtD2jx9P4Yv7tZwt7CQYnwW
         5S336fVLkpJfinrwoaebRgdzd6/rbjsBPd92XPQTYSYG4q5m2EJ6JoaorUCE1H0U
         0svoJ7gt7KVVc4urtCiH
         =o0wB
         -----END PGP SIGNATURE-----
    
        Redirect Errors and Warnings to stderr
    

    The rewrite has turned the previously signed single-line commit into an unsigned multi-line one.

    Keep that in mind before you rewrite signed commits.
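    If you want to check beforehand whether any commits in your history are signed, git can show signature information directly (git log --show-signature is available since git 1.7.9):

    # Walk the history and show gpg signature information per commit
    git log --show-signature

    # Or inspect one specific commit
    git show --show-signature <commit>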


    The post git like a pro: rewrite author history appeared first on Everything CLI.

  • Saturday 17 October 2015 - 12:52

    I am always curious about other people’s vim workflow, especially when it comes to project management and go-to definitions with ctags. I have now used vim for quite some time and want to share my personal workflow. This is about how to create custom local vim configuration files per project and how to manage all of your ctag files easily.

    TL;DR

    So basically what I do is:

    • Go to my project root
    • Create vim project file
    • Create ctag files
    • Start coding

    All done automatically with a one-liner:

    $ make-vim-project all
    

    1. The whole story

    1.1 Install Dependencies

    As I am currently using a MacBook for work, I have to deal with OSX and therefore of course use homebrew to install my stuff. So first I need to get the exuberant version of ctags.

    brew install ctags-exuberant
    

    1.2 Vim and ctags

    I am trying to separate programming languages into different ctag files. For example, one file for c/c++, one file for shell-scripts, one file for javascript and so on. For this to work, I need to tell vim where to look for the files. The relevant vimrc section looks like this:

    """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
    " CTAGS/CSCOPE
    """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
    " Default/Generic tag file
    set tags=tags,.tags
    
    " Filetype specific tag files (This is used for global IDE tags)
    autocmd FileType c              set tags=.tags_cpp,$HOME/.vim/tags/cpp
    autocmd FileType cpp            set tags=.tags_cpp,$HOME/.vim/tags/cpp
    autocmd FileType css            set tags=.tags_css,$HOME/.vim/tags/css
    autocmd FileType java           set tags=.tags_java,$HOME/.vim/tags/java
    autocmd FileType javascript     set tags=.tags_js,$HOME/.vim/tags/js
    autocmd FileType html           set tags=.tags_html,$HOME/.vim/tags/html
    autocmd FileType php            set tags=.tags_php,$HOME/.vim/tags/php
    autocmd FileType sh             set tags=.tags_sh,$HOME/.vim/tags/sh
    

    1.3 Vim and project files

    Now I need a way to always tell vim where my project root is, in order for it to look for the project-specific ctag files. For this I am using local_vimrc via NeoBundle. Here is how to get it into vim.

    " ---- PROJECT vimrc
    NeoBundle 'LucHermitte/lh-vim-lib', {
    \   'name': 'lh-vim-lib'
    \}
    NeoBundle 'LucHermitte/local_vimrc', {
    \   'depends': 'lh-vim-lib'
    \}
    

    This plugin will check the root directory for a file called _vimrc_local.vim. The only thing I want to place into this file is the cd path, so vim knows the root of the project directory:

    $ cat /path/to/project/_vimrc_local.vim
    
    :cd /path/to/project
    

    Whenever I open vim from within this project path, it will check whether there are ctag files as defined in the vim and ctags section above.

    1.4 Creating project and ctag files

    The setup is almost complete; I just need to create the project and ctag files for every project in its root. First, create the project file:

    $ cd /path/to/project && echo ":cd $(pwd)" > _vimrc_local.vim
    

    And then I will add the ctag files. Here is an example for a c/c++ project:

    $ ctags -R -f .tags_cpp \
        --file-scope=yes \
        --sort=yes \
        --c++-kinds=+p \
        --fields=+iaS \
        --extra=+q \
        2>/dev/null
    

    This kind of sucks as I don’t want to issue those long commands every time I create a new project or update my ctags. So it needs to be automated or at least simplified.

    1.5 Using a bash function for project and ctag files

    In its most simple form, I just want to issue a single command which does everything for me. So I wrote a bash function make-vim-project:

    $ make-vim-project
    Usage: make-vim-project <type>
    
    all     Create ctags for every filetype
    web     Create ctags for php, js, css and html
    cpp     Create ctags for c/c++
    shell   Create ctags for bash/sh
    

    Now I can create a c/c++ project easily by just typing this:

    $ make-vim-project cpp
    

    It will automatically create the _vimrc_local.vim shown above and all c/c++-relevant ctag files. I also re-run this command whenever I update my project. So how does the function look and where do I put it?

    First, it can be put anywhere in .bash_profile, .bashrc or any other custom bash file that is sourced by the main bash configuration file. Let’s have a look at the function itself:

    #------------------------------------------------------
    #-------- Vim Project
    make-vim-project() {
        local name dir
    
        name="_vimrc_local.vim"
        dir="$(pwd)"
    
        read -r -d '' USAGE <<-'EOF'
    Usage: make-vim-project <type>
    
    all     Create ctags for every filetype
    web     Create ctags for php, js, css and html
    cpp     Create ctags for c/c++
    shell   Create ctags for bash/sh
    EOF
    
        if [ $# -ne 1 ]; then
            echo "$USAGE"
            return
        fi
    
        # CTAGS
        echo "Building ctags"
        if [ "$1" == "all"  ]; then
            make-ctags
            make-ctags-css
            make-ctags-js
            make-ctags-html
            make-ctags-php
            make-ctags-sql
            make-ctags-shell
            make-ctags-cpp
        elif [ "$1" == "web" ]; then
            make-ctags-php
            make-ctags-html
            make-ctags-js
            make-ctags-css
            make-ctags-sql
        elif [ "$1" == "cpp" ]; then
            make-ctags-cpp
        elif [ "$1" == "shell" ]; then
            make-ctags-shell
        else
            echo "$USAGE"
            return
        fi
    
        # Vimrc
        echo "Creating local vimrc"
        echo ":cd ${dir}" >> "${name}"
    }
    

    As you can see, the function just prints its usage, calls other make-ctags-* functions and creates the _vimrc_local.vim file. Have a look at the gist for the complete source of all other make-ctags-* functions:

    cytopia/create-vim-project

    Just for clarification, here is how one of the ctag functions will look:

    make-ctags-cpp() {
        ctags -R -f .tags_cpp \
            --file-scope=yes \
            --sort=yes \
            --c++-kinds=+p \
            --fields=+iaS \
            --extra=+q \
            2>/dev/null
    }
    

    1.6 Project root

    Let’s have a look at what files are inside my project root after using make-vim-project all:

    $ ls -la
    ...
    -rw-r--r--  1 cytopia 1286676289 73381097 Oct 17 12:01 .tags
    -rw-r--r--  1 cytopia 1286676289 72893221 Oct 17 12:02 .tags_cpp
    -rw-r--r--  1 cytopia 1286676289  1776509 Oct 17 12:01 .tags_css
    -rw-r--r--  1 cytopia 1286676289   409973 Oct 17 12:01 .tags_html
    -rw-r--r--  1 cytopia 1286676289 64329626 Oct 17 12:01 .tags_js
    -rw-r--r--  1 cytopia 1286676289  8989441 Oct 17 12:01 .tags_php
    -rw-r--r--  1 cytopia 1286676289     6223 Oct 17 12:01 .tags_sh
    -rw-r--r--  1 cytopia 1286676289    52748 Oct 17 12:01 .tags_sql
    -rw-r--r--  1 cytopia 1286676289       32 Oct 17 12:02 _vimrc_local.vim
    

    2. What next?

    This workflow has evolved over more than a year of vim experience and reflects my personal preference. I am still not quite satisfied with some of the manual work, especially having to update the ctags once code has been added. If any of you have better workflows and/or can recommend other vim plugins that automate this further, please let me know and share.


    The post vim workflow: go to definition with ctags appeared first on Everything CLI.

  • Thursday 10 December 2015 - 01:02

    So why would you monitor a drupal site?

    From a System Engineer’s point of view, Drupal itself is nothing more than a set of software that requires updates on a regular basis. So what do you do? You treat it that way: update it whenever updates are needed and patch it as soon as security updates are available. The trickier part is to always get notified immediately when security updates are required, so you can patch all of your sites before shit hits the fan.

    Drupal itself offers a way to notify a site admin about regular updates and/or security updates by sending an email to a specified account. If you are only responsible for a couple of Drupal sites, this will be sufficient. However, if you have to keep track of lots of other stuff and have chosen Nagios as a central platform for monitoring different servers and services, it is useful to go the same way with Drupal and keep everything in the same location.

    What about the drupal nagios module?

    I know there is a drupal nagios module out there, but this has to be added to drupal itself and activated in the module section. Unfortunately I do not have access to every site’s code, and for a few sites I have no permission to add modules, so there had to be a different solution.

    DIY if you have to

    I was facing the problems stated above. In my company I use nagios/icinga to keep track of many servers with various services and states. The only remaining relic was the numerous Drupal sites that were sending emails from time to time, complaining about security updates. This way I had to keep an eye on two different systems. As I am a big fan of consolidation and could not find a suitable plugin for Drupal sites, I had to write my own.

    The thoughts

    As I have already written a few nagios plugins I was quite confident and set my goals high. I wanted the normal update notification as well as a security-update-only notification. With these two checks I would be on par with the Drupal functionality itself, so I looked deeper into what else can go wrong on a drupal site:

    • Pending database updates
    • Incorrect file permissions on directories
    • Problems with cron
    • Basically everything the drupal status report can complain about

    The next thought was about the technology to use. Was I going to write it in PHP and make use of drupal hooks to get all the required information? This idea was quickly rejected, as I currently have drupal 6 and 7 systems and will in the future also have drupal 8 ones. So it had to be something that is compatible across all versions.

    The idea came to me when I was checking a page for errors using drush. Drush is a mature toolset for drupal, so why not just write a wrapper that gives me all the desired information in nagios-plugin-style output?
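    To illustrate the idea (this is only a minimal sketch of the concept, not the actual check_drupal source): a nagios plugin is simply a program that prints a one-line status and exits with 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). A drush wrapper could look roughly like this, assuming drush’s pm-updatestatus marks security updates with the string 'SECURITY UPDATE' in its output:

    #!/bin/sh
    # Minimal nagios-style wrapper sketch (hypothetical, not the real plugin).
    # Nagios plugin exit codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
    DRUPAL_ROOT="$1"
    cd "$DRUPAL_ROOT" || { echo "UNKNOWN - cannot cd to $DRUPAL_ROOT"; exit 3; }

    # Count pending security updates (the string match is an assumption)
    NUM="$(drush pm-updatestatus 2>/dev/null | grep -c 'SECURITY UPDATE')"
    if [ "$NUM" -gt 0 ]; then
        echo "CRITICAL - $NUM security update(s) pending"
        exit 2
    fi
    echo "OK - no security updates pending"
    exit 0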

    The check_drupal plugin – first draft

    The goals were set, the technology was decided, I was ready to go, and this is what I came up with in pure POSIX-compliant bourne shell (not bash):

    check_drupal -d <drupal root> [-n <name>] [-s <w|e>] [-u <w|e>] [-e <w|e>] [-w <w|e>] [-m <w|e>] 
    

    The checks are as follows:

    • -s: security updates
    • -u: normal updates
    • -e: all core errors (what the status report shows as errors)
    • -w: all core warnings (what the status report shows as warnings)
    • -m: missing/pending database updates

    Each check also lets you specify the nagios severity (w for warning and e for error) it should be reported with.
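    A complete invocation could then look like this (path and site name are illustrative):

    # Security updates and DB updates as error, everything else as warning
    check_drupal -d /var/www/drupal -n "My Site" -s e -u w -e w -w w -m e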

    Problems with the first draft

    I was testing it back and forth and everything went smoothly. After a few more days of testing and performance optimization in the wrapper script, I noticed that the check itself can take up to three seconds to execute on some drupal instances (depending on the server and the size of drupal’s database).

    The check_drupal plugin – final version

    Nagios itself checks every 5 minutes. 3 seconds for a check that is run every 5 minutes is pretty long, so I had to reconsider in order not to waste too much time on those servers. The other realization was that I do not need to check for problems every five minutes. So, this is what I came up with:

    check_drupal -d <drupal root> [-n <name>] [-s <w|e>] [-u <w|e>] [-e <w|e>] [-w <w|e>] [-m <w|e>] [-l <logfile>]
    check_drupal_log -f <logfile>
    

    There is an additional parameter -l that will log all check results (including nagios exit codes) into a logfile. With this option, the check_drupal script can run on the drupal machine via cron and update the logfile every 6 or 12 or XX hours. The second plugin, check_drupal_log, is the one actually triggered by nagios; it parses the logfile, which only takes milliseconds.
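    Put together, the split setup could look like the following sketch (paths and schedule are illustrative):

    # On the drupal host, e.g. in /etc/crontab: refresh the logfile every 6 hours
    0 */6 * * *  root  /usr/local/bin/check_drupal -d /var/www/drupal -s e -u w -l /var/log/check_drupal.log

    # On the nagios side: only parse the logfile (takes milliseconds)
    check_drupal_log -f /var/log/check_drupal.log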

    The choice

    As you may have noticed, the -l option is optional and you can use it either way: either you just use check_drupal on its own, or you use the combination of both to save some CPU cycles.

    The end

    This is my first contribution to drupal; even though it is not directly related to any drupal module, it is a nice addition. I hope you enjoy it, and if you find a bug, report it and I will be happy to fix it.

    Find the source with install instructions on github:

    cytopia/check_drupal

    Update

    Officially added to:

    • Icinga Exchange
    • Nagios Exchange


    The post check_drupal: Monitoring drupal with nagios appeared first on Everything CLI.

  • Saturday 09 January 2016 - 12:42

    Local vs Remote SSH port forwarding

    When it comes to the art of SSH tunnelling, there are basically two options for where to relay a port to.

    You can relay a port from a remote server to your local machine with ssh -L, hence called local port forwarding. A very basic use-case is if your remote server has a MySQL database daemon listening on port 3306 and you want to access this daemon from your local computer.

    The second option is to make your local port available on a remote server (ssh -R). Remote port forwarding might come in handy if you for example want to make your local web-server available on a port of a public server, so that someone can quickly check what your local web-server provides without having to deploy it somewhere publicly.

    It should now be pretty easy to remember: Local and remote port forwarding always refers to where to relay the port to. The SSH command syntax uses the same easy to remember abbreviations: -L (forward to my local machine) and -R (forward to my remote machine).

    Article series
    SSH tunnelling for fun and profit

    1. Local vs Remote
    2. Tunnel options
    3. AutoSSH
    4. SSH Config

    TL;DR

    Remote MySQL server (remote port 3306) to local machine on local port 5000:

    ssh -L 5000:localhost:3306 cytopia@everythingcli.org
    

    Local web-server (local port 80) to remote server on remote port 5000:

    ssh -R 5000:localhost:80 cytopia@everythingcli.org
    

    Local port forwarding

    (Make a remote port available locally).

    In this example we are going to make a remote MySQL Server (Port 3306) available on our local computer on port 5000.

    Let’s start with the general syntax of local port forwarding:

    ssh -L <LocalPort>:<RemoteHost>:<RemotePort> sshUser@remoteServer
    
    • LocalPort: The port on your local machine where the whole thing should be reachable.
    • RemoteHost: The interface inside the remote server (remoteServer) the daemon is listening on. This can be 127.0.0.1, localhost, a specific IP address or even 0.0.0.0, which refers to all interfaces. If you are unsure, simply ssh into the remote machine and check all interfaces for port 3306 by issuing: netstat -an | grep 3306 | grep LISTEN
    • RemotePort: The actual port on the remote machine (remoteServer) you want to relay to your local machine. In our case (MySQL listens on 3306 by default) it is simply 3306.
    • sshUser: The SSH username you have on the remote server.
    • remoteServer: The address (IP or hostname) by which your remote server is reachable via ssh.

    Now let’s simply forward our remote MySQL server to our local machine on port 5000.

    ssh -L 5000:localhost:3306 cytopia@everythingcli.org
    

    That’s all the magic! You can now simply reach the remote database from your local machine with mysql --host=127.0.0.1 --port=5000 or any other client.

    But wait… which local address does it listen on?

    Yes, you are right! The complete syntax is:

    ssh -L [<LocalAddress>]:<LocalPort>:<RemoteHost>:<RemotePort> sshUser@remoteServer
    
    • LocalAddress: The local address is an optional parameter. If you do not specify it, the forwarded port is bound according to your client's GatewayPorts setting, which by default means the loopback interface (127.0.0.1) only. You can also bind it explicitly, e.g. to 127.0.0.1 or to 0.0.0.0 for all local interfaces.

    This is the full example:

    ssh -L 127.0.0.1:5000:localhost:3306 cytopia@everythingcli.org
    

    Remote port forwarding

    (Make a local port available remotely).

    In this example we are going to make our local web-server (Port 80) available on a remote server on Port 5000.

    Let’s start with the general syntax of remote port forwarding:

    ssh -R <RemotePort>:<LocalHost>:<LocalPort> sshUser@remoteServer
    
    • RemotePort: The port on your remote server (remoteServer) where the whole thing should be reachable.
    • LocalHost: The interface on your local computer the daemon is listening on. This can be 127.0.0.1, localhost, a specific IP address or even 0.0.0.0, which refers to all interfaces. If you are unsure, simply check all interfaces (on your local machine) for port 80 by issuing: netstat -an | grep 80 | grep LISTEN
    • LocalPort: The actual port on your local machine you want to relay to the remote server (remoteServer). In our case (the web-server listens on 80 by default) it is simply 80.
    • sshUser: The SSH username you have on the remote server.
    • remoteServer: The address (IP or hostname) by which your remote server is reachable via ssh.

    Now let’s simply forward our local web-server to our remote machine on port 5000.

    ssh -R 5000:localhost:80 cytopia@everythingcli.org
    

    That’s all the magic! You can now simply reach your local webserver via http://everythingcli.org:5000.

    But wait… which remote address does it listen on?

    Yes, you are right! The complete syntax is:

    ssh -R [<RemoteAddress>]:<RemotePort>:<LocalHost>:<LocalPort> sshUser@remoteServer
    
    • RemoteAddress: The remote address is an optional parameter. If you do not specify it, the forwarded port is bound on the remote server to the loopback interface only (see the GatewayPorts note below). Specify an address to bind it to one specific interface.

    This is the full example:

    Assuming the IP address of everythingcli.org is 109.239.48.64 and you only want to bind it to this IP.

    ssh -R 109.239.48.64:5000:localhost:80 cytopia@everythingcli.org
    

    But wait… it doesn’t work
    By default, the listening socket on the server will be bound to the loopback interface only. This may be overridden by specifying RemoteAddress. Specifying a RemoteAddress will only succeed if the server’s GatewayPorts option is enabled (on the remote server):

    $ vim /etc/ssh/sshd_config
    GatewayPorts yes
    

    Some more details

    Ports below 1024

    Every system user can allocate ports above and including 1024 (high ports). Ports below that require root privileges.
    So if you want to relay any port to a low port, for example port 10, here is how it works.

    Since you are allocating a low port on your local machine, you must do that either as root (locally) or with sudo (locally):

    sudo ssh -L 10:localhost:3306 cytopia@everythingcli.org
    

    Since you are allocating a low port on the remote server, you will need to ssh into the machine as root:

    ssh -R 10:localhost:80 root@everythingcli.org
    


    The post SSH tunnelling for fun and profit: local vs remote appeared first on Everything CLI.

  • Wednesday 13 January 2016 - 14:37

    If you have read the previous article of this series, you should be able to create forward and reverse tunnels with ease. In addition to the previously shown examples I will address some more advanced options for SSH tunnels in general.

    Article series
    SSH tunnelling for fun and profit
    1. Local vs Remote
    2. Tunnel options
    3. AutoSSH
    4. SSH Config

    SSH Login shell

    Remember the following example:

    ssh -L 5000:localhost:3306 cytopia@everythingcli.org
    

    Once you have executed the above command, a tunnel is established. However, you will also be logged in to the remote server with an SSH session. If you simply want to do some port forwarding, you will not need or might not even want a remote login session. You can disable it via -N, which is a very common option for SSH tunnels:

    ssh -N -L 5000:localhost:3306 cytopia@everythingcli.org
    

    The -N option is also very useful when you want to create SSH tunnels via cron.

    • -N: After you connect, just hang there (you won’t get a shell prompt). SSH man: “Do not execute a remote command.” Note: this only works with SSHv2.

    So if you are not going to execute remote commands and will not need a login shell, you also do not need to request a pseudo terminal in the first place.

    ssh -T -N -L 5000:localhost:3306 cytopia@everythingcli.org
    
    • -T: Disable pseudo-terminal allocation. This also makes it safe for binary file transfer, which might contain escape characters such as ~C.

    SSH tunnel via cron

    Imagine you want an SSH tunnel to be established (or checked and, if it is not running, re-opened) via cron every hour. For that to work, SSH must go into the background, which is what -f is for.

    ssh -f -L 5000:localhost:3306 cytopia@everythingcli.org
    
    • -f: Requests ssh to go to the background just before command execution.

    But hey, if SSH is in the background anyway, we do not need a login shell (-N) and therefore also do not need a tty (-T). So the full command ready for cron would be:

    ssh -f -T -N -L 5000:localhost:3306 cytopia@everythingcli.org
    

    Note: Be aware that this example requires private/public key authentication as cron will not be able to enter passwords.
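    A crontab entry for this could look like the sketch below. Adding the standard ssh option ExitOnForwardFailure (my addition here, not strictly required) makes ssh give up when local port 5000 is already bound, so an already running tunnel is not duplicated:

    # Try to (re-)establish the tunnel every hour (illustrative schedule)
    0 * * * * ssh -f -T -N -o ExitOnForwardFailure=yes -L 5000:localhost:3306 cytopia@everythingcli.org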

    SSH tunnel on a non-standard port

    What if the SSH server is listening on a non-standard port (not TCP port 22)? You can always add a port option. Let’s imagine SSH itself is listening on port 1022:

    ssh -T -N -L 5000:localhost:3306 cytopia@everythingcli.org -p 1022
    
    • -p: Port to connect to on the remote host.

    SSH tunnel with a non-standard private key

    Let’s assume you have many different private keys for different servers. If not explicitly specified, SSH will look for a file called ~/.ssh/id_rsa. In this case however, your file is called ~/.ssh/id_rsa-cytopia@everythingcli. So you must also pass this information to the tunnel command.

    ssh -T -N -L 5000:localhost:3306 cytopia@everythingcli.org -i ~/.ssh/id_rsa-cytopia@everythingcli
    

    SSH tunnel via SSH config

    The most complex example from this tutorial is:

    ssh -f -T -N -L 5000:localhost:3306 cytopia@everythingcli.org -p 1022 -i ~/.ssh/id_rsa-cytopia@everythingcli
    

    We are all lazy-ass and don’t want to type the whole thing every time we need a quick tunnel. This is where ~/.ssh/config comes into play.

    Adding user and host

    $ vim ~/.ssh/config
     Host cli
        HostName      everythingcli.org
        User          cytopia
    

    With this, we have created an alias cli for host everythingcli.org with user cytopia. Now our command can be written like this:

    ssh -f -T -N -L 5000:localhost:3306 cli -p 1022 -i ~/.ssh/id_rsa-cytopia@everythingcli
    

    Adding port and identity file

    $ vim ~/.ssh/config
     Host cli
        HostName      everythingcli.org
        User          cytopia
        Port          1022
        IdentityFile  ~/.ssh/id_rsa-cytopia@everythingcli
    

    Now the ssh command looks like this:

    ssh -f -T -N -L 5000:localhost:3306 cli
    

    Adding tunnel config

    In the above example we have a generic configuration for the host everythingcli.org which works for a normal ssh connection as well as for establishing a tunnel. Let’s copy the whole block above under a new alias cli-mysql-tunnel and add the tunnel-specific configuration:

    $ vim ~/.ssh/config
     Host cli-mysql-tunnel
        HostName      everythingcli.org
        User          cytopia
        Port          1022
        IdentityFile  ~/.ssh/id_rsa-cytopia@everythingcli
        LocalForward  5000 localhost:3306
    

    Now we can create the tunnel in a much shorter way:

    ssh -f -T -N cli-mysql-tunnel
    


    The post SSH tunnelling for fun and profit: Tunnel options appeared first on Everything CLI.

  • Sunday 17 January 2016 - 23:46

    I just ran into the problem of having to display a PDF file on my TV. Unfortunately there is no built-in feature able to do so. The TV is only capable of playing movies, so I had to convert the PDF to an mp4 file.

    As I could not find a direct approach, I had to extract all images from the PDF using convert and then concatenate all images into a video stream with ffmpeg.

    PDF to PNG’s

    convert -density 400 input.pdf pic.png
    
    • -density 400: Set the resolution (in dpi) at which the PDF pages are rendered.

    This will create one picture for every PDF page, with the following naming convention: pic-<NUM>.png.
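    One caveat (my addition; the behaviour may depend on your ImageMagick version): convert numbers the files without zero-padding (pic-0.png, pic-1.png, …), while the ffmpeg pattern below expects two digits (pic-00.png). A minimal rename sketch, assuming fewer than 100 pages:

    # Pad single-digit page numbers to two digits (pic-1.png -> pic-01.png)
    for f in pic-[0-9].png; do
        mv "$f" "pic-0${f#pic-}"
    done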

    PNG’s to MP4

    ffmpeg -r 1/5 -i pic-%02d.png -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4
    
    • pic-%02d.png: Read all images from the current folder with the prefix pic-, a following two-digit number (%02d) and the ending .png
    • -r 1/5 (before the input): Display each image for 5 seconds
    • -r 30 (after the input): Output framerate of 30 fps
    • -c:v libx264: Output video codec: h264
    • -pix_fmt yuv420p: YUV pixel format

    Scale the Movie

    As the final movie was >4K, my TV wasn’t able to play it, so in a last step I had to scale it down to an appropriate resolution of 720p:

    ffmpeg -i out.mp4 -vf scale=-1:720  out_720p.mp4
    

    Voila, the final movie shows every page for 5 seconds, with a frame rate of 30 frames per second and a height of 720 pixels.



    The post Convert PDF to MP4 appeared first on Everything CLI.

  • Wednesday 20 January 2016 - 09:56

    Now that you are able to create various forward or reverse SSH tunnels with lots of options and even simplify your life with ~/.ssh/config, you probably also want to know how to make a tunnel persistent. By persistent I mean that it is made sure the tunnel will always run. For example, once your ssh connection times out (by a server-side timeout), your tunnel should be re-established automatically.

    I know there are plenty of scripts out there which try to do that somehow. Some scripts use a while loop, others encourage you to run a remote command (such as tail) to make sure you don’t run into a timeout, and there are various others. But actually, you don’t want to re-invent the wheel; you want to stick to bullet-proof, existing solutions. So the game-changer here is AutoSSH.

    Article series
    SSH tunnelling for fun and profit
    1. Local vs Remote
    2. Tunnel options
    3. AutoSSH
    4. SSH Config

    TL;DR

    autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 5000:localhost:3306 cytopia@everythingcli.org
    

    or fully configured (via ~/.ssh/config) for background usage

    autossh -M 0 -f -T -N cli-mysql-tunnel
    

    What is AutoSSH

    http://www.harding.motd.ca/autossh/README

    Autossh is a program to start a copy of ssh and monitor it, restarting it as necessary should it die or stop passing traffic.

    Install AutoSSH

    How to install AutoSSH on various systems via their package manager.

    • Debian / Ubuntu: sudo apt-get install autossh
    • CentOS / Fedora / RHEL: sudo yum install autossh
    • ArchLinux: sudo pacman -S autossh
    • FreeBSD: pkg install autossh (or: cd /usr/ports/security/autossh/ && make install clean)
    • OSX: brew install autossh

    Alternatively you can also compile and install AutoSSH from source:

    wget http://www.harding.motd.ca/autossh/autossh-1.4e.tgz
    gunzip -c autossh-1.4e.tgz | tar xvf -
    cd autossh-1.4e
    ./configure
    make
    sudo make install
    

    Note: Make sure to grab the latest version which can be found here: http://www.harding.motd.ca/autossh/.

    Basic usage

    usage: autossh [-V] [-M monitor_port[:echo_port]] [-f] [SSH_OPTIONS]
    

    Ignore -M for now. -V simply displays the version and exits. The important part to remember is that -f (run in background) is not passed to the ssh command, but handled by autossh itself. Apart from that you can then use it just like you would use ssh to create any forward or reverse tunnels.

    Let’s take the basic example from part one of this article series (forwarding a remote MySQL port to my local machine on port 5000):

    ssh -L 5000:localhost:3306 cytopia@everythingcli.org
    

    This can simply be turned into an autossh command:

    autossh -L 5000:localhost:3306 cytopia@everythingcli.org
    

    This is basically it. Not much magic here.

    Note 1: Before you use autossh, make sure the connection works as expected by trying it with ssh first.

    Note 2: Make sure you use public/private key authentication instead of password-based authentication when you use -f. This is required for ssh as well as for autossh, simply because in a background run a passphrase cannot be entered interactively.
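    If you have not set up key-based authentication yet, a minimal sketch looks like this (both tools ship with OpenSSH; on OSX you may have to install ssh-copy-id separately, e.g. via homebrew):

    # Generate a key pair (skip if you already have one)
    ssh-keygen -t rsa -b 4096

    # Install the public key on the remote server
    ssh-copy-id cytopia@everythingcli.org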

    AutoSSH and -M (monitoring port)

    With -M, AutoSSH will continuously send data back and forth through the pair of monitoring ports in order to keep track of an established connection. If no data is going through anymore, it will restart the connection. The specified monitoring port and the port directly above it (+1) must be free: the first one is used to send data and the one above it to receive data.

    Unfortunately, this is not too handy, as it must be made sure that both ports (the specified one and the one directly above it) are free (not used). In order to overcome this problem, there is a better solution:

    ServerAliveInterval and ServerAliveCountMax cause the SSH client to send traffic through the encrypted link to the server. This keeps the connection alive when there is no other activity, and when no alive data comes back, ssh disconnects, which tells AutoSSH that the connection is broken and it will restart it.

    The AutoSSH man page also recommends the second solution:

    -M [:echo_port],

    In many ways this [ServerAliveInterval and ServerAliveCountMax options] may be a better solution than the monitoring port.

    You can disable the built-in AutoSSH monitoring port by giving it a value of 0:

    autossh -M 0
    

    Additionally you will also have to specify values for ServerAliveInterval and ServerAliveCountMax:

    autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3"
    

    So now the complete tunnel command will look like this:

    autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 5000:localhost:3306 cytopia@everythingcli.org
    
    • ServerAliveInterval: The number of seconds the client will wait before sending a null packet to the server (to keep the connection alive). We use 30 here.
    • ServerAliveCountMax: The number of server alive messages which may be sent without ssh receiving any messages back from the server. If this threshold is reached while server alive messages are being sent, ssh will disconnect from the server, terminating the session. Default: 3.

    AutoSSH and ~/.ssh/config

    In the previous article we were able to simplify the tunnel command via ~/.ssh/config. Luckily autossh is also aware of this file, so we can still keep our configuration there.

    This was our very customized configuration for ssh tunnels which had custom ports and custom rsa keys:

    $ vim ~/.ssh/config
     Host cli-mysql-tunnel
        HostName      everythingcli.org
        User          cytopia
        Port          1022
        IdentityFile  ~/.ssh/id_rsa-cytopia@everythingcli
        LocalForward  5000 localhost:3306
    

    We can also add the ServerAliveInterval and ServerAliveCountMax options to that file in order to make things even easier.

    $ vim ~/.ssh/config
     Host cli-mysql-tunnel
        HostName      everythingcli.org
        User          cytopia
        Port          1022
        IdentityFile  ~/.ssh/id_rsa-cytopia@everythingcli
        LocalForward  5000 localhost:3306
        ServerAliveInterval 30
        ServerAliveCountMax 3
    

    If you recall all the ssh options we had used already, we can now simply start the autossh tunnel like so:

    autossh -M 0 -f -T -N cli-mysql-tunnel
    

    AutoSSH environment variables

    AutoSSH can also be controlled via a couple of environment variables. Those are useful if you want to run AutoSSH unattended via cron, from shell scripts or during boot time with the help of systemd services. The most used variable is probably AUTOSSH_GATETIME:

    AUTOSSH_GATETIME
    How long ssh must be up before we consider it a successful connection. Default is 30 seconds. If set to 0, then this behaviour is disabled, and as well, autossh will retry even on failure of first attempt to run ssh.

    Setting AUTOSSH_GATETIME to 0 is most useful when running AutoSSH at boot time.

    All other environment variables, including the ones responsible for logging options, can be found in the AutoSSH Readme.
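    As environment variables they are simply set in front of the command, for example from a shell script or cron:

    # Disable the gatetime check so autossh retries even if the very first
    # connection attempt fails (useful for unattended starts)
    AUTOSSH_GATETIME=0 autossh -M 0 -f -T -N cli-mysql-tunnel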

    AutoSSH during boot with systemd

    If you want a permanent SSH tunnel already created during boot time, you will (nowadays) have to create a systemd service and enable it. There is however an important thing to note about systemd and AutoSSH: -f (background usage) already implies AUTOSSH_GATETIME=0, but -f is not supported by systemd.

    http://www.freedesktop.org/software/systemd/man/systemd.service.html
    […] running programs in the background using “&”, and other elements of shell syntax are not supported.

    So in the case of systemd we need to make use of AUTOSSH_GATETIME. Let’s look at a very basic service:

    $ vim /etc/systemd/system/autossh-mysql-tunnel.service
    [Unit]
    Description=AutoSSH tunnel service everythingcli MySQL on local port 5000
    After=network.target
    
    [Service]
    Environment="AUTOSSH_GATETIME=0"
    ExecStart=/usr/bin/autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NL 5000:localhost:3306 cytopia@everythingcli.org -p 1022
    
    [Install]
    WantedBy=multi-user.target
    

    Tell systemd that we have added some stuff:

    systemctl daemon-reload
    

    Start the service

    systemctl start autossh-mysql-tunnel.service
    

    Enable during boot time

    systemctl enable autossh-mysql-tunnel.service
    


    This is basically all I found useful about AutoSSH. If you think I have missed some important parts or you know any other cool stuff, let me know and I will update this post.


    The post SSH tunnelling for fun and profit: Autossh appeared first on Everything CLI.

  • Monday 25 January 2016 - 09:15

    This series has already covered a few basics about ~/.ssh/config in terms of how to simplify the usage of ssh tunnelling. In order to round this up a bit more, I will add some information about ~/.ssh/config you should be aware of. This is only intended to be a quick reminder about how it is done right, plus some useful hints you might not have heard about.

    The following uses examples with pure ssh connection commands, but everything applies equally to establishing tunnels with ssh, as they all read the same configuration file.

    Article series
    SSH tunnelling for fun and profit
    1. Local vs Remote
    2. Tunnel options
    3. AutoSSH
    4. SSH Config

    TL;DR

    Nope, this time you need to read it all.

    Structure of SSH Config

    Probably the most important and widely overlooked part is the order of definition blocks in ~/.ssh/config (and likewise /etc/ssh/ssh_config) in terms of generalization and specialization.

    You can basically categorize blocks into three stages:

    1. Most specific (without any wildcards)
    2. Some generalization (with wildcard definitions)
    3. General section (which applies to all).

    Let’s define a basic ~/.ssh/config covering these three stages and see what it does:

    Wrong way

    What many people do wrong is to define the general stuff at the top. Let’s do this for a second and see what the resulting ssh connection string will be:

    Host *
        User root
        Port 22
        PubkeyAuthentication no
        ServerAliveInterval 30
    
    Host c*
        User cytopia
        Port 10022
        PubkeyAuthentication yes
        IdentityFile ~/.ssh/id_rsa__c_cytopia@cytopia-macbook
    
    Host c1
        HostName 192.168.0.1
    
    Host c2
        HostName 192.168.0.2
    

    If you want to ssh connect to c1 (ssh c1), the file is read as follows:

    1. Find section Host *
      1. Apply User: root
      2. Apply Port: 22
      3. Apply PubkeyAuthentication: no
      4. Apply ServerAliveInterval: 30
    2. Find section Host c*
      1. Ignore User (already defined above)
      2. Ignore Port (already defined above)
      3. Ignore PubkeyAuthentication (already defined above)
      4. Apply IdentityFile
    3. Find section Host c1
      1. Apply HostName: 192.168.0.1

    The final connection string that will be made internally will look like this:

    ssh root@192.168.0.1 -p 22 -i ~/.ssh/id_rsa__c_cytopia@cytopia-macbook -o PubkeyAuthentication=no -o ServerAliveInterval=30
    

    Now this is totally not what you intended to do!

    Right way

    Let’s restructure the ~/.ssh/config into the right order and check the resulting connection string:

    Host c1
        HostName 192.168.0.1
    
    Host c2
        HostName 192.168.0.2
    
    Host c*
        User cytopia
        Port 10022
        PubkeyAuthentication yes
        IdentityFile ~/.ssh/id_rsa__c_cytopia@cytopia-macbook
    
    Host *
        User root
        Port 22
        PubkeyAuthentication no
        ServerAliveInterval 30
    

    The important part to keep track of is the Host sections (aligned to the left). Notice that here the specific definitions are at the very top and the more general, wildcarded definitions (using the asterisk *) follow below.

    If you want to ssh connect to c1 (ssh c1), the file is read as follows:

    1. Find section Host c1 and use its corresponding HostName (192.168.0.1)
    2. Find the more general section Host c* and use its values (User, Port, etc).
    3. Find most general section Host *
      1. Don’t use User as it has already been defined for this connection in c*
      2. Don’t use Port as it has already been defined for this connection in c*
      3. Don’t use PubkeyAuthentication as it has already been defined for this connection in c*
      4. Use ServerAliveInterval as there is no previous definition.

    So from that you must always remember: whenever a specific value has been found, it cannot be overwritten by values defined below. It is first come, first served here. The final connection string that will be made internally will look like this:

    ssh cytopia@192.168.0.1 -p 10022 -i ~/.ssh/id_rsa__c_cytopia@cytopia-macbook -o PubkeyAuthentication=yes -o ServerAliveInterval=30
    

    Now this is how you intended to connect. So always remember:

    1. Specific definitions at the top
    2. General definitions at the bottom
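
    By the way, if you are ever unsure how your ssh client resolves a given host, newer OpenSSH versions (6.8 and later) can print the fully evaluated configuration via ssh -G (the order of the output lines may vary between versions):

    $ ssh -G c1 | grep -E '^(hostname|user|port) '
    user cytopia
    hostname 192.168.0.1
    port 10022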

    Why use SSH config anyway?

    Simpler usage

    Imagine you have a couple dozen or even a hundred servers to take care of. Each of them has different login options: some still use passwords, others use rsa keys, others ed25519 keys, there are lots of different initial users for the connection and much more. Wouldn’t it be much simpler to define everything in one file and not have to care about the rest anymore?
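
    A minimal sketch of what such a file could look like (hostnames, users and key files are made up for illustration):

    Host web1
        HostName 192.168.10.1
        User deploy
        Port 2222
        IdentityFile ~/.ssh/id_ed25519_web

    Host legacy1
        HostName 192.168.10.2
        User admin
        PreferredAuthentications password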

    You could for example use a naming convention for clouded vs. dedicated hosts as so:
    c1, c2, c3, …, d1, d2, d3

    Or you use hosts per customer:
    google1, google2, google3, …, apple1, apple2, apple3

    All those hosts might have completely different settings, even different ports, and you simply type:

    $ ssh c1
    $ ssh d2
    $ ssh google1
    $ ssh apple3
    ...
    

    Other applications make use of it too

    Most programs that make use of ssh can use the same aliases specified in ~/.ssh/config with all their options, simply by specifying the alias inside that program.
    For example, on OSX I am using Sequel Pro to manage all my MySQL connections. Instead of having to specify host, user, port and certificate (in the ssh tunnel section), I simply specify the ssh alias and it will auto-grab all details from my ~/.ssh/config.

    I am sure there are many other programs out there that are also able to make use of it.
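
    Two everyday examples are scp and rsync: both run on top of ssh and therefore resolve the very same aliases:

    $ scp backup.tar.gz c1:/tmp/
    $ rsync -av /var/www/ c1:/var/www/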

    On top of that, if you need to alter settings of one server, you do it in a central place and it will have an effect on all tools instantly.

    Autocompletion

    You will have autocompletion (at least under bash or zsh) for every host and every alias defined. This is true for hosts and even IP addresses. When I type ssh 1 and hit tab:

    $ ssh 1
    192.168.0.1     192.168.0.2     192.168.0.3     192.168.0.4     192.168.0.5     192.168.0.6
    192.168.0.7     192.168.0.8     192.168.0.9     192.168.0.10    192.168.0.11    192.168.0.12
    

    Note: I have replaced the IP addresses with internal ones.

    Hostnames

    $ ssh c
    c1                               c15.example.de                   c4
    c1.example.de                    c16                              c4.example.de
    c10                              c16.example.de                   c5
    c10.example.de                   c17                              c5.example.de
    c11                              c17.example.de                   c6
    c11.example.de                   c18                              c6.example.de
    

    Note: I have replaced the actual domains with example.de ones.

    Defaults

    Within the most general configuration section (Host *) you can define settings that will be applied to every ssh ... command you type.
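
    A minimal sketch of such a defaults block (the values are just examples):

    Host *
        ServerAliveInterval 30
        ServerAliveCountMax 3
        Compression yes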

    So hopefully ~/.ssh/config has caught your attention by now.

    What your mother never told you about ~/.ssh/config

    Identity leak via ssh keys

    If you are a big fan of ssh keys in combination with ssh-agent, then you should be aware that once you connect to any ssh server, all of the public keys held by your ssh-agent are offered to this server.

    You can check which keys are stored inside your ssh-agent via ssh-add.

    $ ssh-add -l
    4096 SHA256:111_SSH_HASH_111 /Users/cytopia/.ssh/id_rsa__host_root@me (RSA)
    256  SHA256:111_SSH_HASH_111 /Users/cytopia/.ssh/id_ed25519__host_user@me (ED25519)
    ...
    

    By default, if you do not manually add any keys via ssh-add, the default keys (those without a custom name) for rsa, dsa, ecdsa and ed25519 (usually id_rsa, id_dsa, id_ecdsa and id_ed25519) are added to the ssh-agent (once they have been created).

    So this means: if you have created one default rsa key simply by typing ssh-keygen, it will end up as ~/.ssh/id_rsa and will also be added to your ssh-agent by default.
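
    If you want to see and control this yourself, ssh-add lets you clear the agent and load keys selectively:

    $ ssh-add -D                       # remove all identities from the agent
    $ ssh-add ~/.ssh/id_rsa_specific   # add only the one key you need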

    This means that if you connect to a lot of untrusted ssh servers, they might log your public keys (just like websites track you via cookies) and might be able to identify you.

    The problem has been addressed by https://github.com/FiloSottile/whosthere/, which can identify your github username.

    Test whether your ssh client settings are vulnerable to the github identity leak:

    $ ssh whoami.filippo.io
    

    This example is from https://github.com/FiloSottile/whosthere/; make sure to visit the github page.

    What FiloSottile recommends is to turn off public key authentication in general and explicitly turn it on per host (where you need it):

    # Turn on pubkey auth per specific HOST
    Host c1
        HostName 192.168.0.1
        PubkeyAuthentication yes
        IdentityFile ~/.ssh/id_rsa_specific
    
    # Turn off pubkey auth for all hosts
    Host *
        PubkeyAuthentication no
        IdentitiesOnly yes
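
    The IdentitiesOnly yes part is important here: it tells ssh to only offer the IdentityFile explicitly configured for a host instead of every key your ssh-agent holds. With this in place, re-run the test from above and the server should no longer be able to identify you:

    $ ssh whoami.filippo.io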
    

    Securing known_hosts

    Let’s look at a line of a typical ~/.ssh/known_hosts file:

    cvs.example.net,192.0.2.10 ssh-rsa AAAA1234.....=
    

    Space separated fields in order of occurrence:
    1. [optional] markers
    2. hostnames (comma separated)
    3. key type and base64-encoded key (older protocol-1 entries store bits, exponent and modulus instead)
    4. [optional] comment (not used)

    This file is pretty talkative: it can reveal all the hosts you have visited so far and therefore has some privacy implications. You can read more about the problems here: Protecting SSH from known_hosts Address Harvesting

    So in order to only store hashes of the hostnames inside ~/.ssh/known_hosts, you will need to alter ~/.ssh/config:

    Host *
        HashKnownHosts yes
    

    The hashed version for the file will look like this:

    |1|JfKTdBh7rNbXkVAQCRp4OQoPfmI=|USECr3SWf1JUPsms5AqfD5QfxkM= ssh-rsa AAAA1234.....=
    

    Note 1: Keep in mind that hashing will only start from now on; previously existing entries will not be hashed.

    Note 2: With hashing you will lose the autocompletion feature based on known_hosts, but if you use aliases, you still have the alias-based autocompletion described above.
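
    Regarding Note 1: if you want the already existing entries to be hashed as well, ssh-keygen can convert the file in place (the original content is kept as ~/.ssh/known_hosts.old):

    $ ssh-keygen -H -f ~/.ssh/known_hosts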

    Multiple connections inside a single one

    SSH needs some time to establish a connection, and this time grows as you use stronger/bigger private/public key pairs. If you open a lot of connections to the same server, this overhead starts to matter. Fortunately, there is a way to reduce it: multiplex multiple ssh sessions over a single connection from the same host/user by re-using an already established connection.

    So how can this be done? Again, you can configure this behavior globally inside your ~/.ssh/config, as the following example shows:

    Host *
        ControlMaster auto
        ControlPath ~/.ssh/sockets/%r@%h-%p
        ControlPersist 600
    
    ControlMaster: Tell SSH to re-use an existing connection (if one is already established) without having to authenticate again.
    ControlPath: The path of the socket file for open SSH connections. Every new connection will hook into this socket and can use the already established connection.
    ControlPersist: Keep the master (the first) SSH connection open for X seconds after the last connection has been closed. This means you have X seconds to connect again without authentication after all connections to this host have been closed.
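
    One thing to keep in mind: the directory used in ControlPath is not created automatically, so make sure it exists and is only accessible by you. You can then inspect and close the master connection via the -O flag:

    $ mkdir -p ~/.ssh/sockets && chmod 700 ~/.ssh/sockets
    $ ssh -O check c1    # is a master connection running?
    $ ssh -O exit c1     # close the master connection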

    Where does this matter?

    Nagios

    If your nagios server runs multiple SSH checks (check_ssh) against one server, it is recommended to set up the nagios server’s ssh client to re-use existing ssh connections in order to speed up those checks.

    Git

    If you work with git a lot and very frequently push, pull or use the autocomplete feature (which requires a remote connection to upstream), you are probably also a candidate for re-using existing SSH connections.

    I myself am not a fan of enabling the whole thing globally (except on the nagios server), but rather prefer it for specific use cases.
    So if you want to enable this for specific hosts only, you could do it like that:

    # For some host
    Host c1
        HostName 192.168.0.1
        ControlMaster auto
        ControlPath ~/.ssh/sockets/%r@%h-%p
        ControlPersist 600
    
    # For github usage
    Host github.com
        HostName github.com
        User git
        ControlMaster auto
        ControlPath ~/.ssh/sockets/%r@%h-%p
        ControlPersist 600
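
    A simple way to see the effect (assuming the github.com block above is in place) is to time two consecutive fetches. The second one falls into the ControlPersist window, skips the whole handshake and returns noticeably faster:

    $ time git fetch    # first call: full ssh handshake
    $ time git fetch    # second call: re-uses the master connection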
    

    Private ssh key leak

    Have you heard about the recent CVEs about possible private key leaks via ssh to a malicious SSH server? See CVE-2016-0777 and CVE-2016-0778.

    In order to avoid this possible vulnerability, add the following undocumented setting to your ~/.ssh/config, at the bottom inside the general section:

    Host *
        UseRoaming no
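
    If you just want to test it for a single connection first, the same option can also be passed on the command line (on the affected OpenSSH client versions):

    $ ssh -o UseRoaming=no c1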
    
    

    Useful tools

    As I have lots of ssh hosts configured in my ~/.ssh/config, it would be impossible for me to remember which domain is hosted on which server (especially when a single server hosts more than 20 separate domains). So I am using a little helper script that searches my ssh configuration file for a given domain or any other keyword and presents me the server it is hosted on.
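
    A very rough sketch of the idea (this is not the actual script, and it assumes that the Host blocks in ~/.ssh/config are separated by blank lines):

    #!/bin/sh
    # Print every block of ~/.ssh/config that mentions the given keyword
    # and highlight the keyword itself.
    awk -v RS= -v ORS='\n\n' -v kw="$1" '$0 ~ kw' ~/.ssh/config \
        | grep --color -E "$1|$"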

    For example, if I want to know on which server www.everythingcli.org is hosted, I can simply type:

    $ sshf everything
    ------------------------------------------------------------
     c1
    ------------------------------------------------------------
     @vhosts: www.everythingcli.org another-domain.com
    

    So it told me that there are two domains on server c1, including the one I was looking for (which will be auto-highlighted via grep --color). Now I can simply go there via:

    ssh c1
    

    If you find this useful, the script is available at github:

    cytopia/sshf

    Eof

    I hope you enjoyed this little introduction to ~/.ssh/config and noticed that it is just as complex as the corresponding server configuration. Keep in mind that I just covered some basics mixed with a few specific examples. There is much more to this configuration file, so go on and read up about the power of your ssh client: man ssh_config

