Channel: Ivanti User Community : All Content - Linux and Unix

2017.3 Linux Agent


Description:

 

The installation of the new Linux agent may not have changed much on the front end, but we have made major changes to how the agent works on the back end. This document describes nixconfig.sh, which is now used for installation of the agent, along with its new features. The changes were made to make the agent less bulky and more consistent with other agents such as Windows and Mac; they include agent settings, conf files and new switches for our components. We will cover installation, prerequisites and new OS support.

 

 

New Items:

 

  • Raspbian version 7 is supported with the 2017.3 release; versions 8 and 9 are coming with SU1.
  • The agent now uses conf files, and inventory no longer uses .db files; it has switched to JSON.
  • Solaris, HP-UX and AIX still use the legacy agent. nixconfig.sh will install the agent as before, but these platforms have not received the new changes; they are on the roadmap to be updated.

 

Installation Guide

 

 

Nixconfig.sh is updated to remove legacy packages, install new component architecture packages on Linux hosts and support legacy agent on Unix (AIX, HP-UX and Solaris) hosts.

Notes:

  • The inventory package is installed with privilege escalation disabled, so the inventory scan will not include system cache, memory bank information, etc.
  • Software distribution and vulnerability remediation require privilege escalation, which is turned on by default.
  • To allow privilege escalation, ensure sudo is set up with password-less access for the landesk user and set the privilegeEscalationAllowed flag to true in all application-specific configuration files (hardware.conf, inventory.conf, software_distribution.conf and vulnerability.conf) located in /opt/landesk/etc.
  • The Core “push” installation will drop the configuration shell script, the configuration INI file, the agent archives and the nixconfig.sh script on the Linux hosts (Unix hosts will also get wget executables). The INI file drives the installation, so package selection, Core address, certificates, etc. are based on its contents. For “pull” installations, the administrator only needs nixconfig.sh, access to either cURL or wget, and the relevant command line options to perform the requested action (install, upgrade or removal).
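To make the privilege escalation note concrete, the flag can be flipped across all four files with a short shell loop. This is only a sketch: the exact key/value layout inside the conf files is an assumption here, so compare against a real file from /opt/landesk/etc before relying on it.

```shell
#!/bin/sh
# Sketch: enable privilegeEscalationAllowed in each per-application conf file.
# ASSUMPTION: the flag appears as `"privilegeEscalationAllowed" : false` in a
# JSON-style conf file; verify the real layout on your agent first.
enable_priv_esc() {
    dir="$1"
    for f in hardware.conf inventory.conf software_distribution.conf vulnerability.conf; do
        [ -f "$dir/$f" ] || continue
        # flip the boolean in place
        sed -i 's/"privilegeEscalationAllowed"[[:space:]]*:[[:space:]]*false/"privilegeEscalationAllowed" : true/' "$dir/$f"
    done
}

# Typical invocation (as root) would be:
# enable_priv_esc /opt/landesk/etc
```

Remember that the sudo side (password-less access for the landesk user) still has to be configured separately.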

 

Push Install:

 

  1. Create a new agent.
  2. Name it and check the boxes needed (Ubuntu and Raspbian will not run vulscan, but will download the files for when it becomes available in a future release).
  3. Schedule the agent and add the machines from a UDD scan. All the files needed are moved to the client for install. Prerequisites are not on the Core; the client requests them from the repository for its distribution (Red Hat goes out to Red Hat servers, for example).

 

Manual Install

 

Script Features:

  • Runs in a minimal shell (Bourne shell)
  • Support for running from no-exec mounted partitions
  • Support for non-root installations (sudo or RBAC)
  • Previous installation detection and upgrade support
  • Prerequisite reporting and installation
  • User defined repositories (yum and zypper only)
  • Handle agent install, removal and upgrade (remove/install combined)
  • Support same guid across upgrades (breadcrumb remains in /opt/landesk/etc/guid file)
  • Pull files from Core by cURL or wget (wget provided for AIX, HP-UX and Solaris)
    • 1st attempt is for individual RPM packages (future)
    • 2nd attempt is archive package (existing tarballs)

Script Usage:

 

Usage: ./nixconfig.sh [OPTION]...

LDMS Agent installation, upgrade and removal processing.

 

-a core        FQDN of LDMS core.

-c INI_file    Uses INI configuration file for installation preferences.

-d             Add debug lines to output.

-h             Prints help message.

-i pkg         Installs specified agent packages [all, cba8, ldiscan, sdclient or vulscan].

-l log_file    Log file for logging output [default: stdout].

-k cert_file   Certificate file.

-p             Install prerequisites - pulled from distribution repositories or Core.

-r pkg         Remove specified agent packages [all, cba8, ldiscan, sdclient or vulscan].

-R             With option -r, ensures the /opt/landesk directory is gone including the GUID file.

-u repo_url    Custom repository definition (Linux Only).

-D             Install assuming no network connection except to core. (Overrides -p and -u options).

 

Script Examples:

  • Installation from a no-exec /tmp partition (it is important to invoke the shell with the full path to the script):

/bin/sh /tmp/nixconfig.sh -a win-pt2tta27i1n.ivanti.com -i all -p

  • Installation from standard /tmp partition:

./nixconfig.sh -a win-pt2tta27i1n.ivanti.com -i all -p

  • Remove installation leaving breadcrumb (add -R to remove breadcrumb):

./nixconfig.sh -r all

Notes:

  • -D allows the customer to install the agent packages without access to a known repository, but it does require access to the Core.
  • With the “-u” (custom repo) option, you need to specify the URL to a proper repository definition file hosted on the RPM repository (http://www.example.com/example.repo).
  • Custom repositories and prerequisite installations can be defined in the INI file, but the Core UI does not support them at this point.
    • PRQ=YES under Products in the INI file will install prerequisites.
    • A section with the following format adds a custom repository (secondary repos can be added by separating the strings with a space):

[Custom Repository]
Repository="http://www.example1.com/example1.repo"

  • To remove an agent from the Core via push, the user can modify the INI file and set all of the products to NO (same as “-r all” on the CLI).
  • Upgrades are handled by removing individual existing components and then reinstalling them.
  • Prerequisites for Linux distributions are pulled from the distribution repository.
  • Prerequisites for Unix systems are pulled from the Core, so the LDMS user just needs to put the proper packages under the proper OS directory on the Core.
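Putting those notes together, an INI fragment enabling prerequisite installation and a custom repository might look like the following. This is an illustrative sketch only: apart from PRQ under Products and the [Custom Repository]/Repository section described above, the layout is an assumption, so base any real file on one generated by the Core.

```ini
[Products]
PRQ=YES

[Custom Repository]
Repository="http://www.example1.com/example1.repo http://www.example2.com/example2.repo"
```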

Prerequisites

 

  • CentOS/Red Hat Enterprise Linux 6 Packages:

    • glibc, pam, xinetd, libgcc, libxml2, zlib, openssl, libtool-ltdl

  • CentOS/Red Hat Enterprise Linux 7 Packages:

    • glibc, pam, xinetd, libgcc, libxml2, zlib, openssl, libtool-ltdl

  • SuSE Linux Enterprise Server 11 Packages:

    • glibc, pam, xinetd, libgcc46, libxml2, zlib, util-linux, libtool

  • SuSE Linux Enterprise Server 12 Packages:

    • glibc, pam, xinetd, libgcc_s1, libxml2-2, libz1, openssl, util-linux, libtool

  • Ubuntu 14.04 and 16.04 Packages:

    • libpam-runtime, xinetd, libxml2, zlib1g, openssl, libltdl7

  • Raspbian version 8 (Jessie)
    • uuid-runtime, libpam-runtime, xinetd, libxml2, zlib1g, openssl, libltdl7
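Before running nixconfig.sh on an Ubuntu or Raspbian host, the package list above can be verified with a short POSIX shell check. This is a sketch that assumes a Debian-style system where dpkg-query is available; adapt the package list (and the package manager query) for the other distributions.

```shell
#!/bin/sh
# Sketch: check the Ubuntu prerequisite packages listed above.
# ASSUMPTION: a Debian-style host where dpkg-query is available.
check_prereqs() {
    missing=""
    for pkg in "$@"; do
        dpkg-query -W -f='${Status}' "$pkg" 2>/dev/null | grep -q "ok installed" \
            || missing="$missing $pkg"
    done
    if [ -z "$missing" ]; then
        echo "all prerequisites present"
    else
        echo "missing:$missing"
    fi
}

report=$(check_prereqs libpam-runtime xinetd libxml2 zlib1g openssl libltdl7)
echo "$report"
```

Anything reported as missing can then be installed with the distribution's package manager (or via the script's -p option).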

 

Configuration Files:

 

This information is fairly extensive, so we have created a separate document covering the full details. Please see: 2017.3+ Linux Agent Conf Files

 

Script Tools:

 

(Esc. Priv = Requires Escalated Privileges)

Executable          | Esc. Priv | AIX | HP-UX | Solaris | CentOS | RHEL | SLES | Ubuntu
--------------------|-----------|-----|-------|---------|--------|------|------|-------
apt-get             |     X     |     |       |         |        |      |      |   X
basename            |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
chmod               |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
chown               |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
crontab             |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
wget (or curl)      |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
cut                 |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
date                |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
dpkg                |     X     |     |       |         |        |      |      |   X
dpkg-query          |           |     |       |         |        |      |      |   X
ed                  |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
echo                |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
expr                |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
grep                |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
groupadd            |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
groupdel            |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
gzip                |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
id                  |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
kill                |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
ls                  |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
mkdir               |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
mv                  |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
paste               |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
pkgadd              |     X     |     |       |    X    |        |      |      |
pkginfo             |           |     |       |    X    |        |      |      |
pkgrm               |     X     |     |       |    X    |        |      |      |
profiles            |           |     |       |    X    |        |      |      |
ps                  |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
pwd                 |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
rm                  |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
rmgroup             |     X     |  X  |       |         |        |      |      |
rpm                 |     X     |  X  |       |         |   X    |  X   |  X   |
sed                 |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
sleep               |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
sort                |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
swinstall           |     X     |     |   X   |         |        |      |      |
swlist              |     X     |     |   X   |         |        |      |      |
swremove            |     X     |     |   X   |         |        |      |      |
sudo                |     X     |     |       |         |        |      |      |
tar                 |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
tr                  |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
uname               |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
yes                 |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
yum                 |     X     |     |       |         |   X    |  X   |      |
yum-config-manager  |     X     |     |       |         |   X    |  X   |      |
useradd             |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
userdel             |     X     |  X  |   X   |    X    |   X    |  X   |  X   |   X
wget (or curl)      |           |  X  |   X   |    X    |   X    |  X   |  X   |   X
zonename            |           |     |       |    X    |        |      |      |
zypper              |     X     |     |       |         |        |      |  X   |

Questions:

  • Regarding pre-req packages does the core now ship with these files?
    • For AIX, HP-UX and Solaris, if the customer puts the prerequisite packages on their core in ldlogon/unix/(aix|hpux|solaris)/ directory, they can use the -p option and the script will pull the prerequisites from the Core (if wget or curl is available).  Linux customers need to have an accessible RPM repository.  The prerequisites are not shipped with the Core at this point because we have some legal work to go through to ensure we can redistribute the packages without issue.
  • Does the installer look automatically for a .0 (public key) cert file in its “run” directory?
    • Yes – they can be specified on the command line, INI file or just placed in the run directory – all should work
  • Is there currently failover logic around the pre-req repositories?
    • We assume Linux distros will have access to a repository and yum/zypper setup properly to work (no “failover”).  Unix variants will only contact the Core for prerequisites.  If the Linux package manager doesn’t work, the prerequisite install will fail at this point.
  • Does the installer support offline installation?
    • Yes, if needed you can copy the nixconfig script, the INI, the .0, as well as the tar.gz files to the machine.  Then once you have set execution rights run these as root.

 

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  


2017.3+ Linux Agent Conf Files


Description:

This doc covers the conf files used by the 2017.3 Linux agent. In the etc directory, there is at least one configuration file (.conf suffix) per application as well as a “general” configuration file for all applications (landesk.conf). When an application starts up, it loads its individual configuration file (hardware.conf, inventory.conf, policy.conf, software_distribution.conf or vulscan.conf); if that file has an “imports” clause, the application applies additional configurations from the files specified, provided the parameters found in those additional files do not already exist in the application configuration file.

 

Application Configuration Options Explained:

  • imports – Defines additional general configuration files which should be applied.
    • filename – The full path of the configuration file to load.
    • type – Should always be keyvalue which is the type of file to load. (Other types not supported at this point).
  • taskTimeoutInSeconds – Maximum amount of time you want an inventory scan to run before it is forcibly terminated.
  • privilegeEscalationAllowed – Boolean which enables the use of the privilegeEscalationCommand value (default is false).
  • logging -  Definition of the possible log types and level to log.
    • level – The expected level of information needed in the log file: Error, Warn, Info, Debug
    • streams – List of streams to write log messages.
      • Types of supported streams: stdout, stderr, logfiles and syslog
      • Standard output stream attributes:
        • name – Must be stdout or stderr
      • Logfile attributes:
        • name – Absolute path to the logfile.
        • maxSizeInMB – Size the log file will reach before it is truncated or rotated.
        • numberToKeep – 0 means the file is truncated; >1 rotates the logfile, naming each rotated file with a .<number> suffix.
      • Syslog attributes:
        • facility – Logging facility to use as defined by syslog.
        • ident – Any prefix wanted on each syslog entry.
  • commands – List of commands used by inventory
    • path – The path to the required inventory commands (dmidecode, localectl, proxyhost and timeout).
    • timoutInSeconds – The maximum number of seconds to run before the command is terminated regardless of completion (nothing will be reported).
    • privilegeEscalationRequired – Does the script require root access? If so, the “privilegeEscalationCommand” will be used to escalate privilege to root access.
  • modules – List of shared libraries (modules) to load when running a given configuration.
  • options – Defines the options allowed by the application.  These should not be updated.
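To make the options above concrete, a heavily abridged, hypothetical conf fragment might look like the following. The key names come from the list above, but the values and overall nesting are illustrative assumptions; always start from the files shipped in /opt/landesk/etc rather than this sketch.

```json
{
    "imports": [
        { "filename": "/opt/landesk/etc/landesk.conf", "type": "keyvalue" }
    ],
    "taskTimeoutInSeconds": 3600,
    "privilegeEscalationAllowed": false,
    "logging": {
        "level": "Info",
        "streams": [
            { "name": "/opt/landesk/logs/inventory.log", "maxSizeInMB": 10, "numberToKeep": 3 }
        ]
    }
}
```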

 

landesk.conf Explained:

  • installPrefix – The directory where the application suite is installed. The agent itself can be relocated, however, the legacy pieces cannot so this option should not be changed.
  • defaultEnvironment – Defines the default environment to host scripts run by the Script component.
  • defaultShell – Defines the default shell to host scripts run by the Script component.
  • privilegeEscalationCommand – The command which allows escalation to root privileges without a password.
  • packageManagers – The list of package managers the agent is to use when performing actions like package listings, software distribution or vulnerability.
  • loggingLevel – If uncommented, defines the default logging level for all applications if the logging->level value is removed from the individual configuration files.
  • Core – Defines the host which represents the Core.
  • Device ID – Defines the random UUID value which represents the machine on the Core. This field is generated by uuidgen during installation.

 

hardware/inventory.conf Configuration Specifics:

  • scanType – Inventory scan type to return to the Core (Full, Hardware).
  • customInventories – List of custom commands or shell scripts to run (must produce expected JSON results).
    • path – The expected path where the script resides. If not found, the script is not executed.
    • timoutInSeconds – The maximum number of seconds to run before the command is terminated regardless of completion (nothing will be reported).
    • privilegeEscalationRequired – Does the script require root access? If so, the

“privilegeEscalationCommand” will be used to escalate privilege to root access.

  • consoleURL – The URL for posting results back to the Core.

 

policy.conf Configuration Specifics:

  • cacheDirectory – Absolute path to the storage area for the policy files after they are pulled from the Core.

 

software_distribution.conf Configuration Specifics:

  • cacheDirectory – Absolute path to the storage area for the software distribution files after they are pulled from the Core.
  • cacheExpirationInDays – Maximum number of days to keep a downloaded file in the cache directory.
  • cacheMaxSizeInMB – Maximum total size of the cache directory files. If the maximum size is reached, the cache will be cleaned up until it falls below the maximum size.

 

vulnerability.conf Configuration Specifics:

  • consoleURL – The URL for posting final vulnerability results back to the Core.
  • cacheDirectory – Absolute path to the storage area for the vulnerability definitions files after they are pulled from the Core.
  • cacheExpirationInDays – Maximum number of days to keep a downloaded file in the cache directory.
  • cacheMaxSizeInMB – Maximum total size of the cache directory files. If the maximum size is reached, the cache will be cleaned up until it falls below the maximum size.
  • language – The language specification for the vulnerability agent and content.

map-vulscan getting killed by OOM-killer


We noticed recently that map-vulscan runs on some of our Linux systems are being killed by the oom-killer.

 

How much memory is required?

 

Have tried to run manually using:

/opt/landesk/bin/map-vulscan -s -V 4 -l /tmp/vulscan.log -o /tmp/vulscan.prog

But that's not creating any logs

 

# free

              total        used        free      shared  buff/cache   available
Mem:        1017484       69416      867080        1588       80988      843164
Swap:        262140      259272        2868

vulscan on Linux client fails to post vulnerability


Hello,

All our Linux clients (RHEL 6 or 7) fail to connect to the core server while posting vulnerability results.

On the server, the message in the logs is along the lines of "fail with certificate-based authentication" / "client does not submit certificate" (translated from French ...)

In /var/log/messages we find: "proxyhost[73009]: localhost HTTP Send header failed -1"

In /opt/landesk/logs/vulscan.log, we find: "Unable to access core using 'http://MyCoreServer//WSVulnerabilityCore/VulCore.asmx'"

 

Additional info:

/opt/landesk/bin/vulscan -v

vulscan 10.2.0.46

Copyright (c) 2003-2017, Ivanti

Mar 28 2017 00:30:12

 

 

Ivanti management console: version 10.1.10.287

 

 

thanks for your help !

 

regards,

Yves

How to scan custom data on Linux/Unix Platforms in detail (MAP-agent / before IEM 2017.3)


 

0 - Version Disclaimer / Clarification

This article covers the creation / use of custom data files with the LANDesk MAP-agent for Linux, used predominantly between LANDesk Management Suite 9.5 and up to (and including) Ivanti Endpoint Manager 2017.1!

 

Please be aware that as of Ivanti Endpoint Manager 2017.3, a freshly architected Linux agent (the "component architecture" agent) exists, which operates quite differently. A separate article will be created for the new agent and its use of custom data (linked from here once it's written up).

 

 

I - Introduction

The purpose of this article is to familiarise the user with the ways in which the LANDesk *-IX inventory scanner picks up / processes custom data - and to do so in a manner that helps people who may not necessarily have had a lot of experience with *-IX OS'es.

 

So whilst this has effectively very similar content to the article here, this one goes through the process without assuming any significant experience with either XML files and/or *-IX OS'es.

 

This is a new feature that is currently only available with the 64-bit agent; the 32-bit agent is not able to pick up Custom Data by means of reading files. A "workaround" for 32-bit agents up to LANDesk Management Suite 9.5 is provided towards the end of this document for anyone interested.

 

II - Getting Started

 

Before we begin - a few things to be aware of:

  • Case sensitivity (depends on your DBMS / its settings)

 

  • PLAN FIRST ... changing this stuff on the fly tends to lead to a huge mess. Knowing *WHAT* you want to collect and *WHERE* you want it to show up (ideally you're aiming for consistency with any Custom Data with your Windows-devices, if that is applicable) up-front reduces the considerable headaches involved in cleaning up overly enthusiastic but unplanned approaches.

 

  • Test and test thoroughly. Once you've set this "live", any changes other than "new additions" can be quite painful.

 

  • The file(-s) must be located in "/opt/landesk/var/customdata/". The "customdata" directory does *NOT* exist by default under "/opt/landesk/var/" - so you need to create it.

 

  • REMEMBER - you must change the ownership of the following to landesk:
    • The directory containing the custom data XML's (i.e. "customdata")
    • Any XML's themselves that you create.

 

You do this via running:

chown landesk:landesk <DIRECTORY_OR_FILE>

 

so for instance, if we want to change the ownership of the "customdata" directory, we can do so via:

chown landesk:landesk customdata

 

- or, if you want to use the full path, you can do so as well:

chown landesk:landesk /opt/landesk/var/customdata
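The directory creation and ownership steps above can be combined into one small sketch. The landesk:landesk owner and the customdata path come from this article; run it as root on a real agent (the chown will fail on machines without a landesk account).

```shell
#!/bin/sh
# Sketch combining the steps above: create the customdata directory and give
# ownership to the landesk user. Needs root on a real agent; the chown will
# fail on machines without a landesk account.
setup_customdata() {
    dir="$1"
    mkdir -p "$dir" || return 1
    chown landesk:landesk "$dir"
}

# setup_customdata /opt/landesk/var/customdata
```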

 

  • REMEMBER - CUSTOM DATA GETS BLOCKED BY DEFAULT!

Whilst the data may be sent to your Core Server, remember that you need to "un-block" it as per this document HERE.

 

  • To test / debug your XML's, run

/opt/landesk/bin/map-reporter -V 255

 

This will launch the part of the inventory scanner that (among other things) processes the custom data. The "-V 255" flag enables maximum verbosity. This way, you can easily check whether there are any problems with the files themselves (usually forgetting to change ownership of the files) or with their data structure (and there are a few potential gotchas here to stumble into).
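Before involving map-reporter at all, a quick well-formedness pre-check can catch basic XML mistakes early. This is a sketch assuming the common xmllint tool (from libxml2) is installed; it is not part of the LANDesk agent.

```shell
#!/bin/sh
# Sketch: pre-check a custom data XML for well-formedness before running
# map-reporter. ASSUMPTION: xmllint (from libxml2) is installed.
validate_xml() {
    if ! command -v xmllint >/dev/null 2>&1; then
        echo "xmllint not found - skipping pre-check"
        return 0
    fi
    # exits non-zero and prints errors if the file is malformed
    xmllint --noout "$1"
}

# validate_xml /opt/landesk/var/customdata/asset_info.xml
```

This only checks XML syntax; ownership and data-structure problems still need the map-reporter verbose run described above.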

 

III - The Custom Data XML-file structure

 

Here's a generic template for the format you need to follow (in essence, you're looking at a pretty standard XML file):

<?xml version="1.0"?>

<!-- OPTIONAL COMMENT #1 -->

<!-- OPTIONALCOMMENT #2 -->

<!-- OPTIONALCOMMENT #3 -->

<NAME_OF_CONTAINER_UNDER_CUSTOM_DATA>

        <OPTIONAL_CONTAINER_SHORTNAME name="Some Optional Container Name">

                  <MyData1>VALUE</MyData1>

                  <MyData2>VALUE</MyData2>

                  (...)

                  <MyLastData>VALUE</MyLastData>

        </OPTIONAL_CONTAINER_SHORTNAME>

</NAME_OF_CONTAINER_UNDER_CUSTOM_DATA>

 

 

... and we would aim to achieve something like the following for our custom data:

 

Note that you can EITHER use "your intended names" directly (as we do in the case of "<LongWindedWay>"-line below), or you can use a shorthand variable inside the XML whose 'full display name' you define (such as in the line "<LDCF name="LANDesk Custom Fields">"-line below)

 

A filled out Custom Data XML-file can then look like so:

* Note - for ease of readability, I'm marking XML comments in THIS COLOUR.

<?xml version="1.0"?>

<!-- LANDesk Unix/Linux Agent - Custom Data XML Example File -->

<!-- Version 1.2.3.4 -->

<!-- My Team / My Date -->

 

<!-- You can use long-names in the XML for your categories / attributes, such as for "Asset Information"-->

<!-- Note that you MUST NOT have spaces in your object-names/tags! -->

 

<!-- Addition of White Space for added readability is fine -->

 

<Asset_Information2>

    <LDCF name="LANDesk Custom Fields_2">

<!-- Note that I use "REL-L-NUM" here! -->

        <RELLNUM name="Location RELEASE NUMBER">9.5.0.1</RELLNUM>

        <LOC name="Location In The World">London</LOC>

        <LongWindedWay name="Comment">

            This is me. Aren't I pretty?

        </LongWindedWay>

    </LDCF>

 

<!-- Note that you *CANNOT* use multiple "root" categories in a single XML. -->

<!-- So you cannot have both the "ASSET_INFORMATION" and the "USERINFO" groups in a single XML. -->

<!-- Just use multiple XML's in this case - the inventory scan will pick them all up.-->

 

    <SC name="Some Other Category_2">

<!-- Note that I use "REL-U-NUM" here! Beware of similar attributes (such as REL-L-NUM above), it can make mistakes more likely -->

        <RELUNUM name="User RELEASE NUMBER">9.5.0.1</RELUNUM>

        <ORGUNIT name="Organisational Unit">MyUnit</ORGUNIT>

    </SC>

</Asset_Information2>  

 

This would then present itself in inventory as per the following two screenshots:

New_Part1.jpg

New_Part2.jpg

 

PLEASE NOTE:

It does *NOT* matter in what order you add the custom inventory data. By default, LANDesk will sort things alphabetically for you in the console anyway (as you can see in the screenshot used above). The below version of the very same XML-data has been reformatted to closer represent the inventory scan showing on the screenshot.

 

Keeping to such an alphabetic theme might make things easier for you to work with whilst you're designing your custom data / if you have to edit it, but it is not a requirement in the slightest.

 

 

 

IV - How to get multiple bits of Custom Data in

 

IV.1 - Option 1 - Multiple smaller files

This is a favourable option because it's easier to control & modify and you can have segregated data (i.e. an "Asset Information" and a "User Information" category, for instance).

 

Also this makes it a lot more manageable if you need to update anything - if you have different scripts collecting different bits of custom data, it's much easier to just have "one function/script change one small file" rather than having a singular, massive XML.

 

This is by far the more manageable and friendly option.

 

IV.2 - Option 2 - A really long file

This is less recommended for several reasons:

* You're more likely to make mistakes (more lines == more likelihood, after all), and ...

* A single error would effectively prevent ALL of your custom data from being processed, rather than just that particular file.

* More complex to update and manage via scripts.

 

I'm not saying "it can't be done" (as I'm certain some people might take it as a challenge) - I'm merely highlighting that you're inviting considerable problems with no real benefit in return.

 

V - Don't-s and Do-s

Just a recap and listing of experiences made.

 

V.1 - Don't-s...

* Don't forget to run "chown" and change ownership over to "landesk:landesk" on the directory as well as the XML file(-s), else the agent may not be able to read the contents of the file(-s).

 

* Don't assume that you'll get everything right the first time round. Test & re-test, as it's simple to do. Remember - all it takes is "map-reporter -V 255". The verbose option will generally be your best friend to highlight problems with your XML's.

 

* Don't be adventurous with naming your XML-tags or object names. Generally, sticking to the following list is "safe":

- Characters from "a-z"

- Characters from "A-Z"

- Numbers (so 0-9)

- Using a dash ( - ) and an underscore ( _ )

 

XML doesn't necessarily deal very well with certain characters outside of those listed above - so try and avoid that risk.

 

Just to clarify, I'm talking about the XML-tags and object-names only here. So in the example given above in Section III, XML-tags would be things such as:

  • The "LDCF" and the "LANDesk Custom Fields_2" in ... <LDCF name="LANDesk Custom Fields_2">
  • The "RELLNUM" as well as the "Location RELEASE NUMBER" in ... <RELLNUM name="Location RELEASE NUMBER">9.5.0.1</RELLNUM>

 

Whilst certain characters such as ( & ) or ( * ) are relatively commonly avoided, some unexpected stumbling blocks do occur. For instance, I was reading out a certain piece of hardware information and the data that was given to me was a single string with a few ( : ) colons in it ... due to uniqueness concerns, we ended up trying to auto-name the attribute with such a ( : ) in it ... such as "Disk:0" for instance ... and this can get you into trouble.

 

Using such characters in the DATA VALUE tends to be fine (so that'd be for instance the "9.5.0.1" string in the above example of RELLNUM) but outside of that, be mindful that XML can scupper your well-laid plans. Test and re-test.
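To stay inside the safe character set described above, tag and attribute names can be sanitized before the XML is written. A small sketch follows; the function name and the underscore replacement are my own choices, not part of the agent.

```shell
#!/bin/sh
# Sketch: replace anything outside the "safe" XML tag character set
# (a-z, A-Z, 0-9, dash, underscore) with an underscore.
sanitize_tag() {
    printf '%s' "$1" | tr -c 'A-Za-z0-9_-' '_'
}

sanitize_tag "Disk:0"; echo    # the colon from the example above becomes "_"
```

Apply this only to tag/attribute names; data values (like "9.5.0.1" above) generally don't need it.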

 

V.2 - Do...

* ... think carefully about what custom data you WANT to collect and what you may want to collect in the future / further down the line.

 

* ... design your XML's sensibly with this in mind; it will save you headaches further down the line. Moving data from "place A" to "place B" is a pain: not only would it need to be done in the database, but in the XML's as well, and you'd potentially face some clients that don't update properly, etc. All sorts of things can go wrong with data in (potentially) multiple places for different devices. Data consistency is your friend!

 

 

 

VI - What about 32-bit agents?

Due to current architectural differences between the 64-bit MAP agent and the 32-bit agent, the above process is not applicable to 32-bit agents.

 

If you end up in a situation where you do need to add custom data to a current 32-bit agent (up to LANDesk Management Suite version 9.5 at the time of writing), the process is essentially the following:

 

1 - Run inventory with the option to create an output file.

2 - Append your desired data in regular inventory scanner format to the output file.

 

For the example used here, the corresponding format data would look like so:

Custom Data - Asset_Information2 - LANDesk Custom Fields_2 - Location RELEASE NUMBER = 9.5.0.1

Custom Data - Asset_Information2 - LANDesk Custom Fields_2 - Location In The World = London

Custom Data - Asset_Information2 - LANDesk Custom Fields_2 - Comment =             This is me. Aren't I pretty?

Custom Data - Asset_Information2 - Some Other Category_2 - User RELEASE NUMBER = 9.5.0.1

Custom Data - Asset_Information2 - Some Other Category_2 - Organisational Unit = MyUnit

 

VI.A - 32-bit MAP agent with IEM 2016 onwards

Please note that as of IEM 2016, the MAP-agent does come in 32-bit format, so the use of the XML's described here is fine with that.

 

VII - In Conclusion

This article should provide all of the information that you might require to work with custom data on *-IX operating systems, even if you have very little experience with it.

How to scan custom data on Linux Platforms in detail (IEM 2017.3 onwards)


 

NOTE:

The screenshots will expand to full size if you click on them!

 

So if you can't read something, just click on it to see it in its full size!

 

I - Introduction

With the introduction of Ivanti Endpoint Manager 2017.3, a completely new & re-architected agent for Linux has been introduced.

 

As part of the essentially total improvements & changeover in "how things worked", one key item is the switch from XML files for custom data to JSON-format data which can be fed to the scanner.

 

II - Getting started - what you'll need & the basic overview of steps

There are some "monkey see - monkey do" type copy & paste examples included further below (see chapter VII - a few examples) that will walk you through the process, so don't be intimidated!

 

Here's the list of things you need to have / be able to do to use custom data with the new component architecture agent. It's not difficult:

  1. You need to be familiar with JSON file/data structures (it's not hard - a bit of googling will get you there, as will a "monkey see / monkey do" approach to copy & pasting examples used here)
  2. You will need to be able to edit (sensibly) one or several config files (via VIM, via GEDIT, or via whatever you prefer).
  3. You will need to be able to at least write a bash script to produce the JSON data (examples are included in this article!), though you can get fancy and use actual programming languages to the same effect! The important thing is the output!

 

... and that's it by and large!

 

Now - let's have a look at the data structure itself!

 

A brief interlude to point to useful reference material for further information that this article will touch on, but not cover in depth:

 

III - The JSON structure for custom data

JSON is not "radically new" - it's been around for a while. It's "sort of" simple to learn, but tends to be hard to master (you'll find yourself getting a missing comma or a bracket wrong somewhere for a while).

 

The good news is that it's VERY easy to use with modern (free) editors, such as Notepad++ or VS Code from Microsoft (which I found to be ESPECIALLY good at locating problems with any JSON structure you're building).

 

A few introductory guides to JSON structures can be found here (leading to 3rd party sites):

 

Other than that, the basic structure of the resulting inventory data is actually pretty simple (since inventory is "just" a tree-like structure in a text file at the end of the day). How you GET there (whether via a fancy program / script or via a simple "spit this out on screen" type bash script) doesn't really matter. The inventory scanner only cares about the result ... not how easy or hard it was to get.

 

III.A - Quick reminder on JSON files ... comments not allowed!

An important reminder on "proper" JSON structure -- please be aware that comments are NOT ALLOWED in JSON.

 

Inconvenient though this is, it is an intentional decision (rather than an oversight) by the people behind the standard ... so include comments in your bash (or whatever) script(s) about what you're doing and why - but not in the JSON output itself. Contrary to the MAP-agent (which used XML files), you cannot include comments as part of your data!
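For instance (a minimal sketch, with placeholder label and attribute names): all the commentary lives in the bash script, while the emitted JSON stays comment-free:

```shell
#!/bin/sh
# Comments live HERE, in the script - never in the emitted JSON.
# This sketch just prints a tiny, comment-free custom data structure.
json=$(cat <<'__JSON__'
{
  "customData": {
    "label": "Comment Demo",
    "Script commented": "yes, but the JSON is not"
  }
}
__JSON__
)
printf '%s\n' "$json"
```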

 

III.B - On to the Custom data structure proper!

First off - a clarifying picture on what is a data attribute (essentially "the actual end data") and what is a data object (hierarchical data objects / "directories"), as represented below:

Data OBJECTS vs Data Attributes.jpg

Data OBJECTS can be called whatever you want, but must be declared as "labels"!

Data ATTRIBUTES can be called whatever you want as well.

 

Once you've visualised / decided on WHERE you want to have your data end up (and how), it's now easier to essentially reverse engineer how your custom data file should look hierarchically.

 

In examining the custom data file structure (examples are included below in chapter VII), there's a lot of "freeform" allowed. There's only a few keywords that MUST be respected - let's examine those now!

 

III.C - IMPORTANT - Keywords!

Just to have a specific list - here are the crucial keywords that are necessary within a custom data JSON:

 

Keyword (CASE SENSITIVE) | Explanation | Example
"label"       | A label is essentially the "directory name" as per the inventory. | "label" : "My Basic Test"
"customData"  | Necessary / required data tag. NOTE - is CASE SENSITIVE. | "customData"
"containers"  | If you want/need "sub directories" in inventory (be it 1 or several), this is how such contents are defined / declared. | "containers" : [ {stuff} ]
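Putting the three keywords together, a minimal skeleton script could look like the following - everything other than "customData", "label" and "containers" is a free-form placeholder:

```shell
#!/bin/sh
# Minimal skeleton combining the three required keywords.
# Only "customData", "label" and "containers" are fixed names;
# all other object and attribute names are free-form.
json=$(cat <<'__JSON__'
{
  "customData": {
    "label": "My Basic Test",
    "Some Attribute": "some value",
    "containers": [
      {
        "label": "A Sub Directory",
        "Another Attribute": "another value"
      }
    ]
  }
}
__JSON__
)
printf '%s\n' "$json"
```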

 

 

III.D - How to get the data?

This is actually down to personal preference. You can use PYTHON scripts, shell scripts ... anything that executes, as long as the output is in a JSON format.

 

The agent comes with a Python script that tries to scan for DOCKER containers for instance.

 

For the purposes of this article, however, we're just going to use the simplest method - a bash script which purely handles outputting the JSON format.

 

You can have a separate script come up with said "output", or have a script create an "output only" script ... you've got a lot of choice!

 

 

IV - How to get multiple bits of Custom Data (and where to configure such things)

 

IV.A - Where to edit & which file needs looking at

The files you're interested in will be under "/opt/landesk/etc/" - and while in principle you could add the custom data section to either the "inventory.conf" or the "hardware.conf" file, it makes the most sense to stick with the logical choice & work with the "inventory.conf" file in the first place.

CONF-File-Location.jpg

 

IV.B - Examining the .CONF in detail

Open up the .CONF file in your preferred text editor, and look for the following section (red highlight) - this is a default file without any changes as yet.

CONF-File_ByDefault.jpg

As should be evident, this is a JSON file structure, so you need to respect this / follow this when editing the file.

 

Conveniently - a custom inventory script is already included with the agent (checking for DOCKER data). So we just need to copy and edit the relevant section.

 

IV.C - Adding our own custom script

 

Step 1 - let's begin by copy & pasting an entry to the array of custom data sections.

Step 2 - Don't forget to add a "," after the first array object of custom data inventories!

Step_1_2_CreateSecondEntry.jpg

Step 3 - edit the copy to make it fit your purposes:

  • In this example, the script we'll be running will be "/opt/landesk/bin/dataadvanced.sh" (if you're going to be doing a lot of this, consider where you want to keep those scripts)
  • Our example does NOT require escalated privileges, so we'll switch that over to "false"
  • While the bash script really doesn't need it, and purely for example, we'll extend the timeout to 20 seconds.

 

So our entries should look like so after the changes:

Step_3_ChangesDone.jpg
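Since the screenshot can be hard to read, here is a rough sketch of what the edited custom inventory array in "inventory.conf" might look like. NOTE: the exact key names here (such as "command" and "timeoutSeconds") are illustrative assumptions and may differ between versions - always copy & edit the entry that ships in your own file rather than typing this in from scratch:

```json
"customInventory": [
    {
        "command": "/opt/landesk/bin/docker.py",
        "privilegeEscalationAllowed": true,
        "timeoutSeconds": 10
    },
    {
        "command": "/opt/landesk/bin/dataadvanced.sh",
        "privilegeEscalationAllowed": false,
        "timeoutSeconds": 20
    }
]
```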

Now - we are ready to save the file and are good to go. Well - nearly.

 

IV.D - (OPTIONAL) Remembering privilege escalation

Notice that by default, privilege escalation is DISABLED in the agent.

Step_4_PrivilegeEscalation.jpg

 

Depending on what you're after, you MAY need to set this to "true". For instance, the Docker-container custom data piece will require privilege escalation.

 

This is purely a "playing nice with the various Linux admins" affair in that we do not assume / escalate privileges by default. Obviously, in order *TO* escalate privileges, we need to have those available to us first (which is why you may want to install the agent with full privileges, rather than just a limited set). This sort of permission issue can / will likely trip you up -- hence, be mindful of it & test.
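As a reminder from the installation documentation: to allow privilege escalation, ensure sudo is set up with password-less access for the landesk user, and set the flag to true in the application-specific configuration files (hardware.conf, inventory.conf, software_distribution.conf and vulnerability.conf) located under /opt/landesk/etc:

```json
"privilegeEscalationAllowed": true
```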

 

IV.E - And you're done - now run inventory

And that's pretty much all there is to THIS part. The real work is usually in building the custom data file/script itself.

 

Telling the scanner to run said script(s) is very simple.

 

We'll be covering a few examples for custom data scripts further below in Chapter VII !

 

V - Don'ts and Do's

This section includes a variety of snippets of learned experiences which should hopefully help in ensuring that you attain success with fewer stumbling blocks!

 

V.A - DON'T-s ...

Please DON'T ...

  • ... forget to test your scripts.
  • ... forget to consider whether you really *DO* or *DO NOT* need privilege escalation on your scripts.
  • ... forget to set a legitimate (but useful) timeout value for your script(s). If the script takes too long, the scanner will just continue on.

 

V.B - DO-s

Please DO ...

  • ... watch out for JSON notation.
    • Remember to include ","-s (commas) at the end of most lines ...
    • ... EXCEPT for the last line of a container.
    • Equally, the last element of an array does not have a "," after it is closed off.
  • ... expect to get JSON notation wrong somewhere down the line anyway (it's easily done). Just test & fix it.
  • ... make sure you test! (yep - calling it out twice, as it's that important!)

 

 

VI - Considerations around privilege escalation

Think carefully about which scripts really do / do not need privilege escalation.

 

As a security measure, it MAY be easier to just have a separate script do the "data gathering" for you, and have the script that the Ivanti inventory scanner makes use of just "spit out the result data" (not doing any actual interrogating itself).

 

It all comes down to how sensitive your NIX admins and your systems are. At the end of the day, as long as the Ivanti scanners can get "to the results", they may not need elevated privileges at all!
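A sketch of that split, with made-up paths and data: a privileged job (run from root's crontab, for example) drops its results into a file, and the unprivileged script that the scanner calls only reads that file and emits JSON:

```shell
#!/bin/sh
# Sketch only - the path and file name are illustrative.
# Part 1 (privileged, e.g. run from root's crontab) would gather the data.
# Here we simulate its result file so the sketch is self-contained:
echo "gathered by the privileged job" > /tmp/gathered_result.txt

# Part 2 (unprivileged, called by the inventory scanner) only reads & emits JSON:
result=$(cat /tmp/gathered_result.txt)
json=$(cat <<__JSON__
{
  "customData": {
    "label": "Split Script Demo",
    "Gathered result": "$result"
  }
}
__JSON__
)
printf '%s\n' "$json"
```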

 

VII - A few examples

This section is meant as a starting aid for many people. A simple "monkey see - monkey do" approach to familiarise yourself and have a "known good" example to start working from.

 

Once the basic structure is something you're comfortable with, there's nothing to stop you from generating this sort of JSON output on the fly as part of a more sophisticated script.

 

So these scripts are "intentionally simple" in JUST spitting out the JSON format, so as to not distract from anything but the data and its relationships!

 

A brief interlude - remember custom data gets BLOCKED by default!

Please be sure to read this article here -- Custom Data is not Entered - Using the Unknown Items Inventory Tool  -- to be aware of / understand how and why custom data gets BLOCKED by default.

 

So you may need to scan 2x (a 2nd time after enabling the relevant data items)!

 

VII.A - "Hello World!" - my first basic custom data piece

Let's begin with something really simple. In the tradition of a lot of IT "firsts" ... a simple "Hello World!" piece of custom data!

 

First - the basic bash script ("helloworld.sh") itself:

#!/bin/sh
cat <<__JSON__
{
  "customData": {
    "label" : "Hello World Test",
    "Write it here" : "Hello World!"
  }
}
__JSON__
exit 0

 

... how it looks inside the inventory scan:

(...)

Custom Data - Hello World Test - Write it here = Hello World!

(...)

 

And finally - how it looks as part of inventory

Hello World.jpg

 

VII.B -"Let's get cooking!" - A slightly less basic 1:1 custom data item

As an actually useful example, we're going to do a multi-level set of straight up 1:1 data.

 

The bash script ("databasic.sh") itself:

#!/bin/sh
cat <<__JSON__
{
  "customData": {
    "label" : "Basic Test",
    "Basic Value Test" : "I am a test on a basic level",
    "containers" : [
      {
        "label" : "My Custom Data",
        "Deeper Level Test" : "I should appear on level 2",
        "containers" : [
          {
            "label" : "I am truly DEEP",
            "containers" : [
              {
                "Deepest Level Test" : "I should be on level 3",
                "Available Memory" : "1234567890",
                "Total Memory" : "0987654321"
              }
            ]
          }
        ]
      }
    ]
  }
}
__JSON__
exit 0

 

The output in the inventory scan file:

(...)

Custom Data - Basic Test - Basic Value Test = I am a test on a basic level

Custom Data - Basic Test - My Custom Data - Deeper Level Test = I should appear on level 2

Custom Data - Basic Test - My Custom Data - I am truly DEEP - Available Memory = 1234567890

Custom Data - Basic Test - My Custom Data - I am truly DEEP - Deepest Level Test = I should be on level 3

Custom Data - Basic Test - My Custom Data - I am truly DEEP - Total Memory = 0987654321

 

(...)

 

And here's what it looks like in the inventory tree:

 

The 1st level /  branch of data:

BasicData_Level_1.jpg

The 2nd level / branch of data:

BasicData_Level_2.jpg

 

And the 3rd and final level /  branch of data:

BasicData_Level_3.jpg

 

VII.C -"What sorcery is this?" - Dealing with 1:* (one-to-many) data!

Dealing with 1:* data is actually pretty simple - one handles the primary key simply via the label. It's really that simple -- no surprise complications.

 

The bash script ("dataadvanced.sh") itself as an example:

#!/bin/sh
cat <<__JSON__
{
  "customData": {
    "label" : "My Advanced Test",
    "containers" : [
      {
        "label" : "(Path:/tmp/myfile.exe)",
        "Standalone value test" : "I stand by myself",
        "containers" : [
          {
            "File Size" : "1234567890",
            "Package Size" : "987654321"
          }
        ]
      },
      {
        "label" : "(Path:/opt/landesk/SomethingElse.sh)",
        "Standalone value test" : "I too stand by myself!",
        "containers" : [
          {
            "File Size" : "123045607890",
            "Package Size" : "998877665544332211"
          }
        ]
      }
    ]
  }
}
__JSON__
exit 0

 

The output for the above as it looks in the inventory scan file:

(...)

Custom Data - My Advanced Test - (Path:/tmp/myfile.exe) - Standalone value test = I stand by myself

Custom Data - My Advanced Test - (Path:/tmp/myfile.exe) - File Size = 1234567890

Custom Data - My Advanced Test - (Path:/tmp/myfile.exe) - Package Size = 987654321

Custom Data - My Advanced Test - (Path:/opt/landesk/SomethingElse.sh) - Standalone value test = I too stand by myself!

Custom Data - My Advanced Test - (Path:/opt/landesk/SomethingElse.sh) - File Size = 123045607890

Custom Data - My Advanced Test - (Path:/opt/landesk/SomethingElse.sh) - Package Size = 998877665544332211

 

(...)

 

And here is what it looks like in the inventory tree:

The basic "root" of the 1:* data:

AdvancedData_1.jpg

 

And both sub-branches now:

AdvancedData_2.jpg

- and -

AdvancedData_3.jpg

 

IMPORTANT NOTE

Please do remember that 1:* data CANNOT be used as custom data without modelled DB-tables.

You need to have specifically crafted DB tables that are set up to accept 1:* style data!

 

For those who make use of 1:* data (one-to-many data - such as software package entries), be reminded that due to the nature of such data it is REQUIRED to go into existing (i.e. modelled) database tables. It is not possible to set up 1:* data structures on the fly in the IEM database, for various reasons that we won't be going into here.

 

1:* data is "more complicated" to handle, and as such REQUIRES modelled tables. This can be achieved in a variety of ways:

  • Use of Ivanti and/or preferred partner consultancy
  • Use of Data Analytics Database Doctor (PLEASE be careful & go through this on a test system first).
  • ... the author hopes/intends to provide a separate white paper on this subject to be published at some point, but it's still a work in progress.

 

VIII - Troubleshooting

 

VIII.A - What happens if my timeouts are too long?

The inventory scanner will complain upon being run.

 

For instance, in this example, I've (intentionally) configured my items to take longer than the default 30 seconds of "total runtime" that the inventory scanner is configured with by default. This resulted in the following error screen upon running inventory:

Timeout_Too_Long.jpg

 

And here in text form:

[root@CentOS-7-64 bin]# ./ldiscan -f

FAIL - /opt/landesk/bin/plugin: [0xb00006e] Task will timeout prior to timeout of all commands and customInventory components: 30 <= 35

[root@CentOS-7-64 bin]#

 

... either revise your scripts' timeout values, or raise the total timeout value in the "inventory.conf" file, as highlighted here:

TotalTimeout.jpg
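For reference, the value highlighted in the screenshot is the scanner's total timeout; the key name and value below are assumptions for illustration (check your own "inventory.conf" for the exact spelling). Whatever it is called in your version, it must exceed the sum of all command and custom inventory timeouts (hence the "30 <= 35" complaint above):

```json
"totalTimeout": 60
```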

 

VIII.B - What happens if the JSON format is malformed?

The inventory scanner will error out & complain with something along the following lines:

[root@CentOS-7-64 bin]# ./ldiscan -f

FAIL - /opt/landesk/bin/plugin: parse error [approximate line: 1] - unexpected '}'

[root@CentOS-7-64 bin]#

 

Screenshot of an actual example (get used to this as you make your own first attempts):

JSON_Parsing_Failed.jpg

 

Note that the error message can only guesstimate at the actual line number for a variety of reasons.

 

If you're dealing with larger JSON files & it's nothing obvious, then GENERATE the JSON output, and copy & paste the contents into any JSON parser / editor (such as Notepad++, XMLSpy and so on). That should help you see where you've likely forgotten either a bracket or a comma quickly and efficiently!
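On the Linux host itself, a quick sanity check (assuming a Python interpreter is available, as it is on most distributions) is to pipe your script's output through Python's built-in JSON parser before the scanner ever sees it. The sample below deliberately generates BROKEN JSON (a missing closing brace) to show the failure path:

```shell
#!/bin/sh
# Write a deliberately broken JSON sample (missing the final closing brace).
printf '%s\n' '{ "customData": { "label": "Broken" }' > /tmp/broken.json

# Validate it; json.tool exits non-zero on a parse error.
if python3 -m json.tool < /tmp/broken.json > /dev/null 2>&1; then
    msg="JSON is valid"
else
    msg="JSON is INVALID - check brackets and commas"
fi
printf '%s\n' "$msg"
```

Once you're building real data, swap the sample file for your actual script's output (e.g. run the "dataadvanced.sh" example from this article and pipe it straight into python3 -m json.tool).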

 

The error usually hints in the correct direction (missing / surplus brackets and/or missing / surplus commas), but can be limited. Play around with it.

 

VIII.C - Helpful command-line options

Don't forget about the following command-line options to help you along when troubleshooting custom data items:

 

Keep in mind that (much like so much else in *-IX) - these options are case SENSITIVE!

  • "-V" -- run verbose logging (by default into "/opt/landesk/log/inventory.log" unless specified otherwise)
  • "-f" -- forces a software scan (useful as a full sync scan).

 

  • "-o {filepath/filename}" -- creates a local output file.
  • "-l {filepath/filename}" -- allows you to specify a separate filepath / filename for the log entries. Useful so the log can be deleted between re-runs.

 

VIII.D - Using the correct tools...

From personal experience, the author would recommend the following tools to help diagnose issues with JSON files:

  • VS Code (by Microsoft) - free and surprisingly competent!
  • XMLSpy (quite versatile and very helpful with the GRID representation for large JSON files)

 

IX - What about UNIX systems (Solaris & co)

Solaris & other UNIX systems presently continue to make use of the (older) MAP-agent.

 

Detailed information on how to use custom data for the MAP-agent can be found here -- How to scan custom data on Linux/Unix Platforms in detail (MAP-agent / before IEM 2017.3)  !

There were mismatches with current inventory data. Device ID within the Inventory does not match the Device ID reported by the Agent for Linux.


LANDESK / Endpoint Management Console > Diagnostics > Real-time discovery reports "There were mismatches with current inventory data."

Device ID within the Inventory does not match the Device ID reported by the Agent for Linux.

Affected machine is a Linux computer with the Agent for Linux installed.

 

screenshot there were mismatches with current inventory data.png

 

 

RESOLUTION

 

1. Delete ( if exists ) the affected Linux computer from the Inventory.

2. Delete ( if exists ) the affected Linux computer from under the Configuration > Pending Unmanaged Client.

3. Manually remove the Agent for Linux. On the target Linux host, execute the following commands to manually uninstall the Agent for Linux:

 

su -

mkdir /install

cd /install

wget http://coreserver.domain.net/ldlogon/unix/nixconfig.sh

chmod +x nixconfig.sh

./nixconfig.sh -r all -R

 

* Change http://coreserver.domain.net to your core server name and your domain

 

4. Install / reinstall the Agent for Linux manually. On the target Linux host, execute the following commands to manually install / reinstall the Agent for Linux:

 

wget http://coreserver.domain.net/ldlogon/agent_linux_x64.ini

 

* please substitute the filename 'agent_linux_x64.ini' to what you have in your environment in \\coreserver.domain.net\ldlogon

 

./nixconfig.sh -a coreserver.domain.net -c agent_linux_x64.ini -i all -p -k 29264735.0 -l /install/agent_install.log

 

* please substitute the filename 'agent_linux_x64.ini' to what you have in your environment

** please substitute the certificate filename '29264735.0' to what you have in your environment in \\coreserver.domain.net\ldlogon

 

 

NOTE

 

On the Windows core server there is a service "LANDesk Inventory Server". Check if that service is running and, if it is not, start / restart it. Wait a few minutes and check whether the Linux machine appears / reappears in the Inventory.

How does Patch Manager 2016 manage Linux patches?


Dear All,

 

I just started working with Patch Manager on Linux. I notice that LDPM 9.x can integrate with Linux vendor tools to patch 32-bit Linux. With LDPM 2016, Patch Manager no longer works with 32-bit Linux. Would you please help with the questions below?

 

1. If I have both 32-bit and 64-bit Linux and Patch Manager 2016, can I manage the 32-bit clients using the 9.x agent connecting to the Patch Manager 2016 server?

2. Can the 2016 Linux client agent (64-bit) integrate with Linux vendor tools (e.g. Red Hat Satellite) to provide automatic patching and dependency checking?

3. Is there any getting-started document on working with Linux 64-bit systems with LDPM 2016? I'm not sure if this - How to leverage Linux vendor tools to remediate vulnerabilities - still works with LDPM 2016? It looks like LDPM 2016 needs manual downloads of Linux patches again?

 

Thanks!


Linux Agent Install Fails


Hi,

 

Fairly new to the Ivanti world, so please bear with me if some of the points here are a little obvious.

 

Anyway, we have LDMS 2016 v 10.0.0.271, and are looking to manage our *nix estate with it for security and compliance purposes. Using the document here - General information on deploying the Linux agent to various flavors of Linux - I managed to get the task scheduled, but it failed with "Agent refused access (unauthorized)" and a return code of 1895.

 

Some more googling tells me that I need to add root in as an alternate credential on the scheduler config page - which I had already done. However, on doing so, it threw up a message:

 

 

That's not a major issue, we can create a service account for this and all is good.

 

However, will this actually make a difference? The root account is not permitted to log on over SSH (security risk), so I'm wondering if it is possible to use a pre-shared key to log on without a password? Or any other solution? Is there a way to tell LDMS to use SUDO to run the commands? Or are we going to have to install manually (we have approx 100 Linux servers of various flavours, so scheduling the installation would be far preferable)?

 

On the manual install note - that failed too, with the following error:

 

ERROR: Missing or empty setup.conf file; can't configure

Error 3 returned from executing baseclient64's setup.sh

Exiting with return code 3

 

Any help/suggestions would be most welcome.

 

Thanks in advance.

Linux Agent and OS-Provisioning


Hi,

 

We got the requirement from our development team to support Linux as a desktop OS.

The developers wish to get Fedora 27 or Ubuntu 17.10. Neither is supported by LANDESK.

Is it possible to install the agent anyway?

 

The other problem we have is OS provisioning. All our systems have to be encrypted, but as far as I know LANDESK only supports provisioning Linux with ImageWv2 (no Kickstart or Preseed).

Is there a way to encrypt the whole system after the provisioning?

 

 

We use Landesk 2017.3

 

Best regards

Heino

SUSE 11 Sp1 not showing detected vulnerability Issue


Hi All

 

I am able to install the Ivanti 2017.3 agent on SUSE Linux Enterprise Server 11 SP1, but the system is not able to detect vulnerabilities.

 

Client details:

 

SUSE Linux Enterprise Server 11 (x86_64)

VERSION = 11

PATCHLEVEL = 2

 

Kernel: 3.0.101-0.7.17-default

 

but All Detected is showing only one; when I try the same on 11 SP2 hosts, the detected packages do show up.

 

11Sp1:

 

How To: Push a Linux agent using a non-root user with some agent install troubleshooting notes (only for version 2017-3 and newer)


Description: This article covers some basic guidelines on how to configure Ivanti to push a Linux agent without using the root user. This article is to be a guide for entry level to advanced Linux users.

 

Note: This article is only for 2017-3 and newer versions of Ivanti.

Note: The following commands were utilized on a CentOS 7 x64 machine and are to be used as a guide. Syntax may vary on other distributions or versions, as may how certain files are edited. Please reference the Linux documentation for the version that you're working with.

 

Adding a user:

useradd -m -d /home/landesk landesk

 

Setting the user password:

passwd landesk

 

Editing the sudoers file:

1- Log into the machine as the root user.

2- At the prompt type: visudo

3- This command should bring up the sudoers file. Find the section entitled: "## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment)". Add the following lines as highlighted in the example below for the user you created:

 

## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment)

#includedir /etc/sudoers.d

landesk ALL=(ALL) NOPASSWD: ALL

landesk ALL=(root) NOPASSWD:SETENV: /usr/bin/chmod, /usr/bin/chown, /bin/crontab, /usr/bin/crontab, /usr/bin/systemctl, /usr/bin/ed, /usr/bin/ex, /usr/sbin/groupadd, /usr/sbin/groupdel, /usr/bin/kill, /usr/bin/mkdir, /usr/bin/mv, /usr/bin/yum, /usr/bin/rpm, /usr/bin/rm, /bin/rm, /usr/sbin/useradd, /usr/sbin/userdel, /usr/bin/nohup, /opt/landesk/bin/ldiscan, /opt/landesk/bin/vulscan

Defaults:landesk !requiretty

 

Testing a user to see if they are configured correctly:

1-  Using putty.exe or some other terminal application connect to the Linux machine using the non-root account.

2- Type in the following commands as shown in the screen shot below. The command "sudo -l" will display the commands the user can run. All of the commands placed in the sudoers file should show here. Running the command "sudo su" SHOULD NOT prompt for a password as the screen shot shows: 

 

Linux3.png

 

Troubleshooting Notes:

 

Example RAXfer.log from the core when a Linux Agent install works fine:

 

Tue, 19 Dec 2017 12:36:16 Task complete, status 7

Tue, 19 Dec 2017 12:39:58 Processing task 8

Tue, 19 Dec 2017 12:39:58 Processing task 8

Tue, 19 Dec 2017 12:40:00 1 machines targeted to task

Tue, 19 Dec 2017 12:40:00 Getting handler associated with scheduled task.

Tue, 19 Dec 2017 12:40:00 1 machines for taskID 8

Tue, 19 Dec 2017 12:40:00 Remote operation timout = 1200

Tue, 19 Dec 2017 12:40:00 Reset remote operation timout = 1200

Tue, 19 Dec 2017 12:40:00 Configuring client config thread limit to 1

Tue, 19 Dec 2017 12:40:00 CENTOS7X64 ProcessNix: Linux

Tue, 19 Dec 2017 12:40:08 13096: WNetAddConnection2() failed 53, user="landesk"

Tue, 19 Dec 2017 12:40:08 13096: WNetAddConnection2() with local domain failed 53, user="10.25.11.98\landesk"

Tue, 19 Dec 2017 12:40:09 CENTOS7X64 Processing for path C:\Program Files\LANDesk\ManagementSuite\LDLogon file C:\Program Files\LANDesk\ManagementSuite\LDLogon\Default Linux Configuration.ini

Tue, 19 Dec 2017 12:40:09 CENTOS7X64 Remote copy count:3

Tue, 19 Dec 2017 12:40:19 13096: WNetAddConnection2() failed 53, user="landesk"

Tue, 19 Dec 2017 12:40:19 13096: WNetAddConnection2() with local domain failed 53, user="10.25.11.98\landesk"

Tue, 19 Dec 2017 12:40:56 CENTOS7X64 No more paths/files in processing queues

Tue, 19 Dec 2017 12:41:01 Task complete, status 5

 

Scheduled Task Returns a 1083 Failure Code:

- This return code is typically permissions based. Make sure permissions are set correctly (as above) and check the /var/run/lock folder. This folder needs to have write permissions for all users in order for agent deployment to work. In the C:\ProgramData\LANDesk\Log\raxfer.log after you run the agent push you'll see an entry like this:

 

Tue, 19 Dec 2017 12:30:06 Tue Dec 19 12:30:14 MST 2017: ERROR: user [landesk] has insufficient permissions to install.

 

- Example command to list the folder permissions:

Linux1.png

- Example command to set the folder permissions:

Linux2.png

 

Scheduled Task Returns a 1026 Return Code:

- This is currently a bug that is being fixed. Run the task a second time and it should succeed.

Certificate Authentication failed for Unix Clients


Hi

 

I am using LANDESK for Unix environments, and it looks like this is the first time I am seeing this message for all our Unix clients.

Is it causing any issues?

 

 

Even client side am able to see this certificate under folder

HOSTNAME:/var/log # cd /opt/landesk/var/cbaroot/certs/

HOSTNAME:/opt/landesk/var/cbaroot/certs # ls -l

total 4

-r--r--r-- 1 landesk landesk 1516 Dec 13 14:49 99342113.0

HOSTNAME:/opt/landesk/var/cbaroot/certs #

 

Could you please let me know whether it causes any issues for these servers.

linux agent 2017.3 remediation not using yum system repositories


hello ,

We're testing New version 2017.3.

Installation / inventory / the list of vulnerabilities are all OK.

When we try to remediate Linux vulnerabilities, the 2017.3 agent doesn't use the yum system repositories but tries to download the RPM from the core server.

has someone got an idea ?

thanks !

 

 

Example of logs when trying to update the "bash" package (the error occurs because no "bash-4.2.46-28.el7.x86_64.rpm" is available on the core server):

 

[Wed Jan 17 11:56:59 2018 CET] Launcher::read() 642 bytes.

[Wed Jan 17 11:56:59 2018 CET] HTTP Header: HTTP/1.1 200 OK

[Wed Jan 17 11:56:59 2018 CET] Child exited.

[Wed Jan 17 11:56:59 2018 CET] Launcher::read() 2653 bytes.

[Wed Jan 17 11:56:59 2018 CET] HTTP Header: HTTP/1.1 200 OK

[Wed Jan 17 11:56:59 2018 CET] Child exited.

[Wed Jan 17 11:56:59 2018 CET] Script MD5: 301101d85500a2e6f9f9debde0a9371a

[Wed Jan 17 11:56:59 2018 CET] Standard script found: Yum detection script

[Wed Jan 17 11:56:59 2018 CET]     package (known script): http://coreserver/landesk_partage/Patchs/INTL/Redhat/*bash-4.2.46-28.el7.x86_64.rpm

[Wed Jan 17 11:56:59 2018 CET] Publishing message type: Software Distribution Scan

[Wed Jan 17 11:56:59 2018 CET] Software Distribution Action Start

[Wed Jan 17 11:57:00 2018 CET] Child exited.

[Wed Jan 17 11:57:00 2018 CET] Yum upgrade package: bash

[Wed Jan 17 11:57:00 2018 CET]   Launching script: yum install -y bash

[Wed Jan 17 11:57:00 2018 CET] Child exited.

[Wed Jan 17 11:57:00 2018 CET]    RV Status: 1

[Wed Jan 17 11:57:00 2018 CET]   Script status: 1

[Wed Jan 17 11:57:00 2018 CET] Script wrote to stderr:

[Wed Jan 17 11:57:00 2018 CET]     Package upgrade failed[1]: bash

[Wed Jan 17 11:57:01 2018 CET] Child exited.

[Wed Jan 17 11:57:01 2018 CET] Number of packages removed: 0

[Wed Jan 17 11:57:01 2018 CET] Number of packages updated: 0

[Wed Jan 17 11:57:01 2018 CET] Publishing message type: Software Distribution Result

 

 

rpm -qa | grep ivanti

ivanti-cba8-10.2-0.128.x86_64

ivanti-software-distribution-11.0-0.198.x86_64

ivanti-pds2-10.2-0.128.x86_64

ivanti-inventory-11.0-0.198.x86_64

ivanti-vulnerability-11.0-0.198.x86_64

ivanti-base-agent-11.0-0.198.x86_64

 

# arch

x86_64

 

cat /etc/redhat-release

Red Hat Enterprise Linux Server release 7.3 (Maipo)

Yum update and all detected are showing difference


Hi All,

 

I am finally able to resolve all my agent and policy issues with clients after updating to 2017.3 SU1.

 

Now I am able to deploy single packages to the clients.

 

Usually we will do *yum update* when we want to do a full system update in a Linux environment, and I want to do the same from LANDESK as well.

 

I followed a repair task like this:

 

The operating system is running CentOS 6.8 and we now plan to upgrade to 6.9 (with all packages).

 

- I have filtered all CentOS packages and added them to one custom group

- After that I ran a repair task, selecting the target

- The task completed successfully without any errors

 

With this, everything should be upgraded.

 

After this, when I am doing yum update, it is showing as:

 

 

 

###### When I am viewing the detected items for one client in the Ivanti console, I am seeing only the below:

 

 

 

#### There is a big difference.

 

Am I missing anything while doing the repair? Please let me know.

 

thanks


Requirements and agent install process on AIX in 2016.3


Description

This is an overview of the methods of installing the AIX agent for 2016+ versions of LANDESK

Note: Before installing the agent, verify that the AIX OS is a supported platform (see DOC-23848).

 

Requirements for Agent:

yum

openssl-1.0.1e

libgcc-4.8.2

libstdc++-4.8.2

libxml2-2.9.1

bash-4.2

zlib

 

Requirements for YUM:

YUM package: https://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/INSTALLP/ppc

Dependencies: https://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/ezinstall/ppc/ - yum_bundle_v1.tar

 

YUM readme:

https://ftp.software.ibm.com/aix/freeSoftware/aixtoolbox/ezinstall/ppc/README-yum

 

Method 1: Standard push

For detailed steps, see How to install the 32-bit and 64-bit Linux and Unix agents in 9.0 SP3 and 9.5

 

Method 2: Manual agent install

1. Please copy the files needed for a manual agent install to your AIX server. The files for each agent are listed below.
C:\Program Files\LANDesk\ManagementSuite\ldlogon\unix\aix\baseclient64.tar.gz
C:\Program Files\LANDesk\ManagementSuite\ldlogon\hash.0 (Note: The file isn't actually named hash.0 but will instead be named with the unique key combination of the core server in question. This is the public certificate for the core that needs to be included with the agent)
C:\Program Files\LANDesk\ManagementSuite\ldlogon\AgentName.sh (Note: This file will be named after the agent name you provided when creating the AIX agent)

2. Run the agentname.sh, if there is any permission issue, please change the permission of the file.
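
Step 2 can be sketched as follows. The block uses a scratch stand-in for AgentName.sh so it can be run anywhere; on the real host you would chmod and run the script that was copied from the core:

```shell
# Stand-in for the AgentName.sh copied from the core (demo path only).
SCRIPT=/tmp/AgentName.demo.sh
printf '#!/bin/sh\necho agent install started\n' > "$SCRIPT"

chmod +x "$SCRIPT"   # fixes "permission denied" when the script isn't executable
"$SCRIPT"            # run the agent install script as root
```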

LDMS 2016 SU4 Agent Install Changes


Description:

When you install LDMS 2016 SU4 you may notice that the agent installation process has changed.  The document below describes the new nixconfig.sh, which is now used for installation of the agent.  The goal of this change is to provide a simple and robust method for installing, upgrading, and removing the *nix agents. One note: with this change we no longer manage firewalls during the agent install.

 

Script Features:

  • Runs in a minimal shell – Bourne shell
  • Support for running from no-exec mounted partitions
  • Support for non-root installations (sudo or RBAC)
  • Previous installation detection and upgrade support
  • Prerequisite reporting and installation
  • User-defined repositories (yum and zypper supported via repo file)
  • Handles agent install, removal and upgrade (remove/install combined)
  • Keeps the same GUID across upgrades (breadcrumb remains in the /opt/landesk/etc/guid file)
  • Pulls files from the Core via cURL or wget (wget provided for AIX, HP-UX and Solaris)
    • 1st attempt is for individual RPM packages (future)
    • 2nd attempt is the archive package (existing tarballs)

 

Script Usage:

Usage: ./nixconfig.sh [OPTION]...

LDMS Agent installation, upgrade and removal processing.

 

-a core        FQDN of LDMS core.

-c INI_file    Uses INI configuration file for installation preferences.

-d             Add debug lines to output.

-h             Prints help message.

-i pkg         Installs specified agent packages [all, cba8, ldiscan, sdclient or vulscan].

-l log_file    Log file for logging output [default: stdout].

-k cert_file   Certificate file.

-p             Install prerequisites - pulled from distribution repositories or Core.

-r pkg         Remove specified agent packages [all, cba8, ldiscan, sdclient or vulscan].

-R             With option -r, ensures the /opt/landesk directory is gone including the GUID file.

-u repo_url    Custom repository definition (Linux Only).

 

The Core “push” installation will drop the Configuration shell script, Configuration INI file and nixconfig.sh scripts on the Linux hosts (Unix hosts will also get wget executables).  The INI file drives the installation, so package selection, Core address, certificates, etc. are based on the INI file contents. For “pull” or "manual" installations, the admin only needs nixconfig.sh and access to either cURL or wget, but could also provide an INI file on the local system, which can be further configured by the command line options above.

 

Script Examples:

  • Installation from a no-exec /tmp partition (since the partition does not allow direct execution, it is very important to invoke the shell with the path to the script):

/bin/sh /tmp/nixconfig.sh -a win-pt2tta27i1n.shavlik.com -i all -p

 

  • Installation from standard /tmp partition:

./nixconfig.sh -a win-pt2tta27i1n.shavlik.com -i all -p

 

  • Remove installation leaving breadcrumb (add -R to remove breadcrumb):

./nixconfig.sh -r all

 

Notes:

  • Execution of the script should be done from the directory of the script.
  • The “-u” (custom repo) option has not been fully tested.  You will need to specify the URL to a proper repository definition file hosted on the RPM repository (http://www.example.com/example.repo).
  • Custom repositories and prerequisite installations can be defined in the INI file, but the Core UI does not support them at this point.
    • PRQ=YES under Products in the INI file will install prerequisites.
    • A section with the following format adds a custom repository (secondary repos can be added by separating the strings with a space):

[Custom Repository]

Repository="http://www.example1.com/example1.repo"

  • To remove an agent from the Core via push, the user can modify the INI file and set all of the products to NO (same as “-r all” on the CLI).
  • Upgrades are handled by removing the individual existing components and then reinstalling them.
  • RBAC support on Solaris has not been fully tested but should be functional.
  • Prerequisites for Linux distributions will be pulled from the distribution repository.
  • Prerequisites for Unix systems are pulled from the Core, so the LDMS user just needs to put the proper packages under the proper OS directory on the Core.
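
Putting the two INI notes above together, a minimal fragment that enables prerequisite installation and a custom repository might look like the following (only the keys quoted in the notes are shown; the remaining sections of the Core-generated INI are omitted):

```
[Products]
PRQ=YES

[Custom Repository]
Repository="http://www.example1.com/example1.repo"
```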

 

Questions:

  • Regarding pre-req packages, does the Core now ship with these files?
    • For AIX, HP-UX and Solaris, if the customer puts the prerequisite packages on their core in ldlogon/unix/(aix|hpux|solaris)/ directory, they can use the -p option and the script will pull the prerequisites from the Core (if wget or curl is available).  Linux customers need to have an accessible RPM repository.  The prerequisites are not shipped with the Core at this point because we have some legal work to go through to ensure we can redistribute the packages without issue.
  • Does the installer look automatically for a .0 (public key) cert file in its “run” directory?
    • Yes – they can be specified on the command line, in the INI file, or just placed in the run directory – all should work.
  • Is there currently failover logic around the pre-req repositories? 
    • We assume Linux distros will have access to a repository and yum/zypper setup properly to work (no “failover”).  Unix variants will only contact the Core for prerequisites.  If the Linux package manager doesn’t work, the prerequisite install will fail at this point.
  • Does the installer support offline installation?
    • Yes. If needed, you can copy the nixconfig script, the INI file, the .0 certificate, and the tar.gz files to the machine.  Then, once you have set execution rights, run the script as root.

 

Pushing agents to Linux/Solaris using root account fails


Issue:

You configure the 'root' account as alternative credentials in the Scheduler Service configuration. You run an unmanaged device discovery and try to push the agent to the devices you configured. The pushes fail, with a log file indicating that authorization was refused. LANDESK uses PuTTY to log in remotely to the Linux/Solaris device. When you enter the root account/password in PuTTY directly, you are disconnected immediately.

 

Cause:

Access through SSH can be disabled for the root account.

 

Resolution:

You need to log in to the device directly, access the shell, and edit the following file:

vi /etc/ssh/sshd_config

 

Search for PermitRootLogin using the following command:

/PermitRootLogin

Change the configured value to 'yes' (press INSERT or i to be able to insert text).

 

Save your changes and exit vi:

ESC, then type :wq

 

Now restart the SSH service:

svcadm restart svc:/network/ssh


This will now allow you to push your agent to this device using the root account.

 

Note: if you are not logged in to the shell as root, it may be necessary to run these commands with sudo.
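
As a non-interactive alternative to the vi edit above, the same change can be sketched with sed. The block below applies the substitution to a scratch copy so it can be run safely anywhere; on the real host you would point it at /etc/ssh/sshd_config (back it up first) and then restart the SSH service as shown above. The POSIX pattern `#*` matches an optional leading comment marker:

```shell
# Demo copy of sshd_config; use /etc/ssh/sshd_config on the real host.
CFG=/tmp/sshd_config.demo
printf '#PermitRootLogin no\n' > "$CFG"                # demo content only

# Uncomment (if needed) and force the option to "yes".
sed 's/^#*PermitRootLogin.*/PermitRootLogin yes/' "$CFG" > "$CFG.new"

grep PermitRootLogin "$CFG.new"   # prints: PermitRootLogin yes
```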

Manual Install of 64 Bit Linux/Unix Agent


Installing the 64-bit Linux or Unix agent manually

 

Applies to LDMS 9.5, 9.6 and 2016.x

*This applies to most 64-bit *nix installations with a couple of modifications, e.g. for HP-UX, create an HP-UX agent and get the files from the HPUX folder

 

Steps:

    1. Ensure the machine has the proper dependencies installed:  *nix Agent Dependencies
    2. Create a Linux64Install folder somewhere on the Linux machine to gather the files
    3. Create a 64-bit Linux agent in the LDMS console named linux64 or whatever you need to name it (try to avoid spaces when naming Linux agents; when we get to the point where we install manually, spaces can add headaches)
    4. Open c:\program files\landesk\management suite\ldlogon
    5. Copy any .0 files associated with the core to your Linux64Install folder
    6. Make a certs subfolder in your Linux64Install folder and copy the .0, baseclient64.tar.gz, and vulscan64.tar.gz files into it. The agent install consumes the .0, baseclient64.tar.gz, and vulscan64.tar.gz files each time it installs the agent, so if you need to run multiple installs you have quick access
    7. Locate linux64.sh (this will be named whatever you called the agent, the same as with a Windows agent INI) and copy it to your Linux64Install folder
    8. Browse to c:\program files\landesk\management suite\ldlogon\unix\
    9. Copy rmlinuxclient64.sh to your Linux64Install folder (best practice is to remove the old agent first, if applicable). You can typically get the most recent removal script from the following page; it is usually the shipping version but may be more recent:
      1. How to uninstall the Linux and Unix agents and the options available
    10. Copy nixconfig.sh to your Linux64Install folder.
    11. Browse to c:\program files\landesk\management suite\ldlogon\unix\linux
    12. Copy baseclient64.tar.gz, vulscan64.tar.gz, and softwaredist.tar.gz to the Linux64Install folder
    13. Using a tool like WinSCP, copy the Linux64Install folder to /tmp on the Linux machine
    14. SSH to the Linux machine as root and/or elevate the session after logging in
    15. Change to the directory: cd /tmp/Linux64Install
    16. Allow execute on the removal and install scripts: chmod +x *.sh
    17. Remove the old agent if applicable: ./rmlinuxclient64.sh
    18. Install the agent: ./linux64.sh
    19. Open firewall ports: What TCP and UDP Ports Must be Open on a Linux Agent's Firewall
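
Steps 15-18 can be sketched as shell commands. The block below uses a scratch directory and stand-in scripts so it can be run anywhere; on the real host the directory is /tmp/Linux64Install and the scripts are the ones copied from the core:

```shell
# Scratch directory standing in for /tmp/Linux64Install on the real host.
DIR=/tmp/Linux64Install.demo
mkdir -p "$DIR" && cd "$DIR"

# Stand-ins for the removal and install scripts copied from the core.
printf '#!/bin/sh\necho removed\n'   > rmlinuxclient64.sh
printf '#!/bin/sh\necho installed\n' > linux64.sh

chmod +x *.sh              # step 16: allow execute on the scripts
./rmlinuxclient64.sh       # step 17: remove old agent if applicable
./linux64.sh               # step 18: install the agent
```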

How to troubleshoot AIX agent installation



Description

During the install of an AIX or UNIX agent, several errors may occur. The most common is a dependency on a certain library, e.g. an error that libstdc++.a could not be loaded.


 


Troubleshooting

General Tips

1. As the error says that libstdc++.a could not be loaded, verify whether the file exists on the machine.

2. You can also run ldd against a working agent binary to list all of the dependencies it needs.

3. If you can find the dependency, add the directory that contains the missing libraries to the library path (for example: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/freeware/lib64/).

4. If the binary is 32-bit, add the lib directory that contains the 32-bit versions of the missing libraries instead.
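
The steps above can be sketched as shell commands. The directories are examples (the AIX Toolbox packages install under /opt/freeware), and on the real host you would point ldd at the agent binary rather than /bin/ls:

```shell
# Step 1: look for the missing library in the likely locations.
find /opt/freeware /usr/lib -name 'libstdc++*' 2>/dev/null || true

# Step 2: list the dependencies of a binary (use the agent binary in practice).
command -v ldd >/dev/null && ldd /bin/ls || true

# Step 3: add the directory containing the missing libraries to the path.
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/opt/freeware/lib64"
echo "$LD_LIBRARY_PATH"
```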

 

 

 

Environment:

LANDesk Management Suite 9.5, 9.6
