Netscout: Why the Hybrid Workforce Needs Smart Edge Monitoring https://mec.ph/news/why-the-hybrid-workforce-needs-smart-edge-monitoring/ Thu, 13 Jan 2022 07:51:43 +0000 https://staging.mec.ph/?p=54065 In this blog, you will get a glimpse into how smart edge monitoring enables businesses to handle IT’s most prominent challenges in problem identification and resolution within complicated, multivendor settings. Back in the early days of the pandemic, the workforce had no choice but to dramatically adapt to a work-from-home posture. But when restrictions were… Continue reading Netscout: Why the Hybrid Workforce Needs Smart Edge Monitoring


In this blog, you will get a glimpse into how smart edge monitoring enables businesses to handle IT’s most prominent challenges in problem identification and resolution within complicated, multivendor settings.

Back in the early days of the pandemic, the workforce had no choice but to dramatically adapt to a work-from-home posture. But when restrictions were gradually loosened and as cases went down, organizations began to bring some of their workers back to the office while following a rotational scheme. This hybrid mix of remote and on-site work has become the new normal. 

 

A study from Accenture found that 63 percent of high-growth companies have adopted a hybrid work model. These companies have prioritized guaranteeing that their workforce is both healthy and productive—a strategy that is producing bottom-line benefits.

 

The same study found that 83 percent of employees, in general, have taken a liking to the hybrid model. What is interesting is that digitally native Gen Z employees wanted some onsite work. The report disclosed that 74 percent of Gen Z respondents preferred to interact with coworkers in person, while 68 percent of baby boomers and 66 percent of Gen Xers expressed a desire to work in a hybrid model.

The Hybrid Work Model Highlights the Demand to Provide a Good End-User Experience

As more and more workers are asked to embrace a hybrid work model, it is crucial to guarantee that they have the tools needed to perform their jobs effectively. This means collaboration tools and enterprise applications must supply an excellent end-user experience.

 

Unified communications and collaboration (UC&C) systems are increasingly relied upon in this hybrid setting. Another survey found that 76 percent of workers use video conferencing for remote work. As corporate infrastructures become more complicated, depending on numerous vendors to deliver critical services to both home and office workspaces, IT faces a considerable challenge in recognizing and resolving issues quickly.

 

With an expanding list of applications, including software as a service (SaaS), unified communications as a service (UCaaS), and data center-based services running at the edge, IT teams are under pressure to ensure the end-user experience. To achieve this, they need complete visibility and integrated analysis throughout the transaction ecosystem.

Many Edges, Little Visibility

With multiple edges in today’s IT environments, there are cracks in visibility that make it challenging to discern whether problems are occurring at the client edge, network edge, or data center/cloud service edge.

 

Traffic issues at these edges produce consequences such as delayed logins, slow responsiveness, and even outages in critical enterprise applications. Each of these can potentially affect worker productivity and customer service negatively.

 

As a result of this limited visibility across a considerable number of edges, IT needs vendor-agnostic tools that can quickly determine the origin of issues. Smart edge monitoring is the answer to this dilemma.

The Strength of Combining Packet Data Gathered at the Edge with Synthetic Testing Data

Smart edge monitoring is the result of combining packet data, gathered at the edge, with synthetic testing. NETSCOUT Smart Edge Monitoring is a novel, patent-pending architecture that delivers complete visibility and insights for IT teams to guarantee the highest-quality worker digital experience for any network or application, regardless of where users perform their jobs.


When performance issues start to materialize for the remote user, smart data transaction tests alert IT to a possible concern, which can be rapidly addressed via the solution’s robust service triage workflows. Early detection and rapid resolution of such problems reduce worker frustration and lost productivity, and may even prevent broader-scale outages.
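To make the idea of a synthetic transaction test concrete, below is a minimal sketch of a probe that times a request to a collaboration endpoint from a worker's location and raises an alert when responses slow down. This is not NETSCOUT's implementation; the endpoint URL, interval, and threshold are illustrative assumptions.

```python
# Minimal synthetic-transaction probe: periodically time an HTTPS request
# from the client edge and flag slow responses. Illustrative only; the
# endpoint, interval, and threshold below are assumptions, not NETSCOUT's.
import time
import requests  # third-party: pip install requests

TARGET_URL = "https://collab.example.com/login"   # hypothetical SaaS endpoint
THRESHOLD_S = 2.0                                  # alert if slower than 2 seconds
INTERVAL_S = 60                                    # probe once per minute

def probe_once(url: str) -> float:
    """Return the end-to-end response time of one synthetic transaction."""
    start = time.monotonic()
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return time.monotonic() - start

if __name__ == "__main__":
    while True:
        try:
            elapsed = probe_once(TARGET_URL)
            if elapsed > THRESHOLD_S:
                print(f"ALERT: {TARGET_URL} took {elapsed:.2f}s (> {THRESHOLD_S}s)")
            else:
                print(f"OK: {elapsed:.2f}s")
        except requests.RequestException as exc:
            print(f"ALERT: synthetic test failed: {exc}")
        time.sleep(INTERVAL_S)
```

In practice such probes would run from many client edges and feed a central dashboard, but the principle of measuring the transaction the way a user experiences it stays the same.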


NETSCOUT is an industry leader in smart edge monitoring. Its Smart Edge Monitoring solution provides complete, borderless monitoring and visibility, and offers the most effective route to ensuring the best end-user experience in today’s complex, hybrid workplaces.

Netscout: DDoS Attacks and Attackers Are Evolving https://mec.ph/netscout-news/ddos-evolving-attacks/ Fri, 02 Aug 2019 06:47:58 +0000 https://mec.ph/?p=37648 Evident in NETSCOUT’s 14th Worldwide Infrastructure Security Report (WISR) is the ongoing game of whack-a-mole between defenders and attackers.


Evident in NETSCOUT’s 14th Worldwide Infrastructure Security Report (WISR) findings is the ongoing game of whack-a-mole between defenders and attackers. Nearly every year’s findings show proof that the more things change, the more they stay the same.

 

Once a brand-new exploit is identified, it never goes away. It gets used and abused in cycles during which activity spikes and then recedes, often for years, until it comes back to life once more. There’s no better example than Memcached servers and their potential for abuse.

 

The Rise of Memcached Attacks

 

In 2010, a presentation at the Black Hat USA Digital Self Defense conference indicated that there were many insecure Memcached deployments internet-wide that could be abused and exploited. Not much happened—that is, until early 2018, when NETSCOUT’s Threat Intelligence Team warned that it “observed a major increase in the abuse of misconfigured Memcached servers residing on Internet Data Center (IDC) networks as reflectors/amplifiers to launch high-volume UDP reflection/amplification attacks.”

 

Weeks later, in February 2018, there was the first-ever terabit-size DDoS attack. This was followed days later by an attack nearly double that size, measuring 1.7 Tbps.
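To see how reflection/amplification over UDP reaches terabit scale, here is a back-of-the-envelope calculation. The per-reflector figures below are assumptions chosen for illustration, not measurements from the WISR; publicly reported Memcached amplification factors run into the tens of thousands.

```python
# Back-of-the-envelope amplification math for a UDP reflection attack.
# All inputs are illustrative assumptions, not measurements from the WISR.
REQUEST_BYTES = 15                      # tiny spoofed request sent to each reflector
AMPLIFICATION_FACTOR = 10_000           # response bytes per request byte (assumed)
REQUESTS_PER_SEC_PER_REFLECTOR = 1_000  # assumed request rate toward each reflector
NUM_REFLECTORS = 2_000                  # misconfigured servers abused as reflectors

response_bytes = REQUEST_BYTES * AMPLIFICATION_FACTOR
attack_bps = response_bytes * 8 * REQUESTS_PER_SEC_PER_REFLECTOR * NUM_REFLECTORS

print(f"Per-reflector response size: {response_bytes / 1e3:.0f} kB")
print(f"Aggregate attack traffic:    {attack_bps / 1e12:.2f} Tbps")
```

With these assumed numbers a 15-byte spoofed request becomes roughly 150 kB of reply traffic aimed at the victim, and a modest pool of reflectors is enough to push the aggregate past 2 Tbps, which is why exposed UDP services are such attractive targets.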

 

While exploits are identified, abused, and abandoned, attackers keep searching for the easiest path to success. They’re looking for the weakest link, and the WISR has shown over the last fourteen years how the game is played between attackers and defenders. As one area of defense is shored up, attackers move on to something else. If a crucial new service is launched, they test its resilience. That’s how it goes. That’s how it will always go.

 

The Constant Evolution of DDoS Attacks:

 

  • The 2007 WISR reflected significant concern over DDoS flooding of links and hosts. As a result, ISPs made investments in their mitigation capabilities to prevent these attacks. By the 2008 WISR, ISP concern over DDoS flooding of links and hosts had fallen in the rankings from 24 percent to 11 percent. Attackers then began targeting applications.
  • In 2009, network operators focused their defenses against lower-bandwidth and application-layer DDoS attacks. This led to a change in techniques and a return to volumetric attacks in 2010. “Based upon our experiences working with operators over the last year, we believe this huge increase in attack-traffic bandwidth is also partly due to operators focusing their defenses against lower-bandwidth and application-layer DDoS attacks. Attackers may have had to ‘up the ante’ to overwhelm the defenses and bandwidth capacity of defenders,” said the report authors.
  • By 2012, network operators had invested both in on-premises protection against low-bandwidth application-layer attacks and in cloud-based defenses against high-volume attacks. So, what did attackers do? They changed techniques once more, unleashing complex, multivector offensives that included high-volume, application-layer, and stateful-infrastructure assaults all in one sustained attack.
    “This year’s results confirm that application-layer and multivector attacks are continuing to evolve while volumetric attacks are beginning to plateau in terms of size,” read the 8th annual WISR. “While 86 percent reported application-layer attacks targeting internet services, most concerning is that multivector attacks are up markedly. Attackers have now turned to sophisticated, long-lived, multivector attacks—combinations of attack vectors designed to cut through the defenses an organization has in place—to accomplish their goals.”

 

This year’s WISR found attackers had yet again shifted their focus to stateful infrastructure attacks targeting firewalls and IPS devices. These attacks virtually doubled, from 16 percent in 2017 to 31 percent in 2018. One reason firewalls and IPS devices are targeted? The probability of success is fairly high. Of those who experienced stateful attacks in 2018, 43 percent reported that their firewall and/or IPS contributed to an outage during the attack.

 

Another fascinating finding was that SaaS, cloud, and data center services were all increasingly targeted by attackers. Adversaries typically target new services because they’re viewed as less mature, more vulnerable targets.

 

SaaS, Cloud, and Data Center DDoS Attack Trends

 

  • SaaS services: 2018 data showed a threefold year-over-year increase in the number of DDoS attacks against SaaS services, from 13 percent to 41 percent

  • Third-party data center and cloud services: The number of DDoS attacks against third-party data centers and cloud services also showed a threefold increase in 2018, from 11 percent to 34 percent

  • Service providers: Cloud-based services were increasingly targeted by DDoS attacks, up from 25 percent in 2016 to 47 percent in 2018

 

Looking ahead to next year, we know that the innovation will continue. Just since the close of the WISR survey period, NETSCOUT’s Threat Intelligence Team has disclosed the following:

 

  • Mirai DDoS attacks have moved from IoT to Linux: Threat actors are applying lessons learned from IoT malware to target commodity Linux servers. For example, the Hadoop YARN vulnerability was initially used to deliver DemonBot, a DDoS malware, to IoT devices. Soon after, threat actors used the vulnerability to install Mirai on Linux servers, blurring the line between IoT and server malware.
  • Mobile phones are increasingly used in DDoS attacks: “Attackers have recently begun launching CoAP reflection/amplification DDoS attacks, a protocol primarily used today by mobile phones in China, but expected to grow with the explosion of Internet of Things (IoT) devices. Like any reflection/amplification attack, attackers begin by scanning for abusable addresses, then launch a flood of packets spoofed with the source address of their target,” the team warned in January of this year.

 

DDoS attacks are constantly evolving, and attackers are continuously looking for new targets and adopting new techniques. This is why NETSCOUT has been advocating, for the better part of the past decade, a multilayered defensive approach that combines on-premises protection for your stateful infrastructure and applications with cloud-based protection from high-volume attacks.

Download Free Netscout Resource

 

Get access to authentic content from one of the leading network assessment and performance experts in the world, from the Philippines’ premier technology provider.

Netscout: Location’s Power in Networks Without Borders https://mec.ph/netscout-news/power-location-networks/ Mon, 01 Jul 2019 08:32:09 +0000 https://mec.ph/?p=37224 Netscout discusses removing borders within the enterprise network.


NETSCOUT’s “visibility without borders” vision is focused on the idea that digital transformation and virtualization erase the borders across components and layers that exist in today’s networks, bringing end-to-end visibility into how networks work and perform. Demolishing the borders that isolate network elements frees operators from the constraints of location, but it doesn’t abstract network performance from location. On the contrary, networks without borders unlock the power of location.

Removing Borders within the Enterprise Network

Just as the internet takes down borders across cultures and nations while supporting highly localized content and services, removing the borders in our communication networks allows operators to extract the value of location in ways that aren’t possible in today’s networks, where function remains tied to a fixed location within the network architecture and traffic is treated as a uniform stream of bits transmitted across the network.

As borders come down, operators not only gain (and need) network visibility, they also gain (and need) flexibility. In a virtualized network, they get to choose what goes where. Which functions should be kept in a centralized location or in the cloud? Which should instead be moved toward the edge? And where is the suitable edge – the cell site, the basement of an enterprise, the central office, or a metropolitan data center? How distributed should the network be? How should different traffic flows, services, and content types be managed within such distributed networks? Which bit should be transmitted first?

Network Topology within the Age of 5G

In the age of 5G, networks become dynamic, agile, and self-optimizing, and performance increasingly depends on real-time resource allocation and network topology – which, in a virtualized network, translates into the location of function.

 

And location doesn’t only impact performance. It also impacts the cost of deploying and running the network, the type of services and the quality of service the network can support, and the revenue streams it can command.

 

Latency is a prime example of this. By deploying computing resources closer to the edge and using network slicing to keep latency low for specific types of traffic or services, operators have to modify their network’s topology, but they can also generate new revenues from new services that depend on low latency, such as online gaming or some enterprise IoT applications.
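For a rough sense of why moving compute closer to the edge lowers latency: light in fiber covers roughly 200 km per millisecond, so distance alone puts a floor under round-trip time. The sketch below applies that rule of thumb to a few hypothetical placements; real latency also includes queuing, processing, and radio-access delays.

```python
# Round-trip propagation delay as a function of distance to the compute site.
# Uses the ~200 km/ms rule of thumb for light in fiber; distances are
# illustrative and ignore queuing, processing, and radio-access delays.
FIBER_KM_PER_MS = 200.0

def rtt_floor_ms(distance_km: float) -> float:
    """Lower bound on round-trip time from fiber propagation alone."""
    return 2 * distance_km / FIBER_KM_PER_MS

for site, km in [("regional cloud (1,500 km)", 1500),
                 ("metro data center (100 km)", 100),
                 ("on-premises edge (5 km)", 5)]:
    print(f"{site:28s} -> RTT floor of about {rtt_floor_ms(km):5.2f} ms")
```

Moving the serving function from a distant regional cloud to a metro or on-premises edge cuts the propagation floor from roughly 15 ms to 1 ms or less, which is the budget headroom that latency-sensitive services need.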

 

Edge computing and network slicing are the main technologies that give location its new prominence. They operate orthogonally: edge computing works horizontally, from the center to the periphery of the network; network slicing works vertically, with parallel channels that cross the network. Their intersection magnifies the power of location in optimizing the utilization of network resources. Not all traffic is created equal, and edge computing and network slicing are designed to manage the variety in traffic requirements within the capabilities of the deployed wireless infrastructure and extract the highest value from the network topology.

Extracting More Value From the Network

But the adoption of edge computing and network slicing is only the first step in extracting value from the choice of location. Even more significantly, operators need to decide how to implement them to fully benefit from the low latency – as well as the higher capacity, reliability, and security – that 5G promises. There’s no unique answer to the what-goes-where questions we asked earlier. Every operator will have to find its own answers, and because this is all new territory, the whole wireless ecosystem – vendors included – needs to learn how to use the data that is available, but still mostly underused, to extract more value from their networks.

 

That raises the question of where the value of the network comes from. Traditional metrics, like throughput or dropped calls, are no longer enough to capture network value. To maximize network value, operators need to optimize network performance for specific outcomes and strategic goals.

Key Questions Network Operators Should Ask to Improve Value:

  • What should an operator optimize?
  • What cost-benefit tradeoffs is it willing to make?
  • In the latency example, which applications should have guaranteed low latency?
  • And which are fine with best-effort delivery?
  • How is the operator going to balance the requirements of different traffic flows cost-effectively?
  • How much extra cost and effort is it willing to bear to lower the latency of some applications?
  • How much should it expect to save by running some traffic as best-effort?

Operators need to answer these two sets of questions – what goes where and what to optimize – as they decide how to deploy edge computing and network slicing in commercial deployments. They need visibility across the network to guide them through this process, make the right decisions, and continually refine the capabilities of their networks.

Because edge computing and network slicing add two dimensions (horizontal and vertical) that interact with one another, they also increase the complexity of the optimization process and the amount of data to be processed. And to get to the right answers for their specific network, services, demand, and strategy, operators are moving to more powerful but also more intensive approaches to understand their networks and use what they learn in a continuous optimization process:

  • Collect reliable, detailed, location-aware, real-time data on network performance at the application or service level
  • Develop the capabilities to access the data as required (e.g., performance data across vendors)
  • Drill down network data at the layer level, at the network slice level, and at the microservice level and relate it to the quality of experience and performance for various users or devices and services
  • Identify the relevant data (e.g., anomaly detection, user experience) and ignore the rest (a minimal sketch follows this list)
  • Analyze, monitor, and troubleshoot the network in real time, with high spatial granularity
  • Generate responses to address issues and optimize the network topology and the real-time resource allocation
  • Automate the process, and repeat to continually improve network performance
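As a minimal illustration of the anomaly-detection step in the list above, the sketch below flags latency samples for a single network slice that deviate sharply from a rolling baseline. The window size, threshold, and sample values are arbitrary assumptions, not operator-validated settings.

```python
# Toy anomaly detection on per-slice latency samples: flag values that sit
# more than K standard deviations above a rolling baseline. The window and
# threshold are arbitrary assumptions for illustration.
from collections import deque
from statistics import mean, stdev

WINDOW = 30   # number of recent samples forming the baseline
K = 3.0       # how many standard deviations counts as an anomaly

def detect_anomalies(samples_ms):
    baseline = deque(maxlen=WINDOW)
    for i, value in enumerate(samples_ms):
        if len(baseline) >= 5:  # wait for a few samples before judging
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and value > mu + K * sigma:
                yield i, value, mu
        baseline.append(value)

if __name__ == "__main__":
    # Simulated per-slice latency (ms): steady around 12 ms with one spike.
    samples = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 48.7, 12.1, 12.0]
    for idx, val, base in detect_anomalies(samples):
        print(f"sample {idx}: {val:.1f} ms vs baseline of {base:.1f} ms -> anomaly")
```

A production system would of course apply such logic per slice, per site, and per service, and feed the results into the response and automation steps that follow in the list.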

This is a difficult transformation, one that can end up giving operators data that is too detailed to lead to issue resolution or learning, creating duplication and fragmentation if the optimization process is carried out separately for different functions, or more generally creating excessive complexity. Visibility might end up obfuscating the workings of the network instead of exposing them.

Managing the Complexity of Modern Networking

To avoid falling into this predicament, operators need to establish a strong and reliable optimization process that allows them to go deeper and gain a better understanding of the network when needed, but without adding unmanageable overhead. The combination of learning (AI and machine learning) and automation can help operators manage the extra complexity that technologies like edge computing and network slicing bring, make it possible to extract the value of location, and enable new ways to operate and profit from the network.

 

For sure, we are still taking the first steps in this direction, and the move to a distributed, location-aware, and function-aware network will take time, effort, and commitment. However, it is also an opportunity that operators cannot afford to sit out if they want to make their 5G networks shine.

Netscout: ICT Cloud Management Challenges https://mec.ph/netscout-news/netscout-cloud-management-challenges/ Thu, 13 Jun 2019 02:34:23 +0000 https://mec.ph/?p=36812 Today’s growing trend toward cloud adoption – whether it’s a multi-cloud strategy using different cloud providers or a hybrid-cloud one relying on public and private clouds – has opened the door to tremendous transformational opportunities.


Today’s growing trend toward cloud adoption – whether it’s a multi-cloud strategy using different cloud providers or a hybrid-cloud one relying on public and private clouds – has opened the door to tremendous transformational opportunities. It has also added significant complexity and challenges for IT. For example, widespread cloud migration has led to new application architectures, such as microservices, as well as the self-aware, self-healing, self-scaling, and self-optimizing software-defined infrastructure that supports these services. This added complexity is stretching IT professionals to find ways to effectively monitor and secure services across these environments.

Achieving Cloud Visibility

According to the RightScale 2018 State of the Cloud Report, 71 percent of respondents found governance and control to be a challenge, with many IT organizations lacking the visibility required to manage cloud environments.

 

Hybrid and multi-cloud environments raise the biggest hurdles when it comes to detecting when and how security breaches or service failures occur, and then determining what steps need to be taken to resolve issues before end users are adversely affected. What’s needed is visibility without boundaries – gaining a complete, in-depth view into the entire hybrid-cloud environment and all of its various interdependencies.

 

Visibility without boundaries boils down to seeing across all service layers, including applications, infrastructure, and their various dependencies. Since every application transaction is communicated across the virtual or physical network, wire data, or traffic flows, is the best source of the data needed to achieve visibility. In short, IT needs continuous end-to-end cloud monitoring and in-depth analysis of the traffic flows over the network in order to achieve holistic visibility across applications and the entire service delivery infrastructure.

The key to cloud visibility is extracting, collecting, organizing, and analyzing pertinent information from the wire data exchanged between application workloads in the form of East-West and North-South traffic flows that span the private cloud, public cloud, and data center. This traffic-based information can then be leveraged to create smart data at the collection point. The resulting smart data, gathered in real time, provides enterprises with actionable intelligence that enables IT to identify problems, optimize infrastructure and application performance, and discover threats and vulnerabilities on demand.
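To give a concrete flavor of what organizing wire data can look like, here is a simple, hypothetical sketch that labels flows as East-West (both endpoints inside private address space) or North-South (at least one public endpoint). Real hybrid-cloud monitoring uses far richer context than IP prefixes; the RFC 1918 ranges here are used purely for illustration.

```python
# Classify observed flows as East-West (both endpoints in private address
# space) or North-South (at least one public endpoint). Purely illustrative;
# real monitoring platforms use far richer context than IP prefixes.
from ipaddress import ip_address, ip_network

PRIVATE_NETS = [ip_network(n) for n in
                ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private(addr: str) -> bool:
    ip = ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)

def classify_flow(src: str, dst: str) -> str:
    return "East-West" if is_private(src) and is_private(dst) else "North-South"

if __name__ == "__main__":
    flows = [("10.1.2.3", "10.4.5.6"),        # workload to workload
             ("10.1.2.3", "52.95.110.1"),     # workload to a public endpoint
             ("192.168.1.10", "172.16.8.9")]  # data center to private cloud
    for src, dst in flows:
        print(f"{src:>14} -> {dst:<14} {classify_flow(src, dst)}")
```

Even this toy classification hints at why wire data is so useful: the same packet stream that carries the application transaction also carries the context needed to organize it.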

 

Analyzing the smart data to extract key metrics, and disseminating this vital information along with service interdependencies through dashboards, alerts, and workflows, empowers IT to better understand application and service availability, reliability, and responsiveness. Visibility without boundaries allows IT to cut through the complexity of a cloud environment and troubleshoot problems in real time, mitigating potential network, security, and compliance risks.

 

By harnessing the insights of smart data, organizations can retain visibility and control over their hybrid- and multi-cloud environments and confidently leverage the strategic value of the flexibility, agility, and scalability required to stay competitive in today’s highly connected world. A smart data approach offers a detailed picture of applications and services and their various dependencies, giving IT organizations the visibility they need to ensure success.

Download the Free NETSCOUT Resource

 

Get access to authentic content from one of the leading network assessment and performance experts in the world, from the Philippines’ premier technology provider.

Cloud-based Internet Access in the Philippines by Ruckus https://mec.ph/ruckus-news/cloud-based-internet-philippines/ Wed, 08 May 2019 23:26:14 +0000 https://mec.ph/?p=35840 Cloud-based Internet Access in the Philippines by Ruckus Networks


Ruckus Networks announced that Ruckus Cloud Wi-Fi, a cloud-managed enterprise Wi-Fi solution, is now available in the Philippines and the rest of the Asia Pacific region.

 

Ruckus Cloud Wi-Fi allows network administrators to manage multiple locations through a single web- or mobile app-based dashboard. This solution is designed to help ‘Lean IT’ staff at schools, retail, professional services, warehouses, and hotels effectively manage a multi-site network while delivering exceptional connectivity for students, guests, and customers.

 

Additionally, it allows organizations to lessen their total cost of ownership (TCO) compared to other solutions by matching cloud efficiency with high-performance access points (APs) that serve more users over a wider area.

 

The Ruckus Cloud Wi-Fi solution offers short videos to showcase its new features. It also allows the network to auto-update and remain secure, and informs administrators of any outages through text messages and push notifications via a mobile app.

 

Designed to be simple and scalable, Ruckus Cloud Wi-Fi allows businesses of all sizes to deploy and remotely oversee Wi-Fi networks across multiple sites. With just a single interface to create new Wi-Fi networks, add access points, and manage network performance and activity, the solution lets administrators manage from anywhere via a simple and intuitive, web-based interface or the Ruckus Cloud mobile app.

 

“As organizations in the Asia Pacific region undergo digital transformation, reliable and high-performance Wi-Fi is a must to accelerate growth and improve operational efficiencies,” said Kho Teck Meng, regional sales director, Asean, for Ruckus Networks. “By using the same enterprise-grade technology present in all of our installations, customers now have access to a combination of simplicity and performance that wasn’t previously available. We look forward to helping our partners across the region drive greater value for their customers with a solution that is easy to deploy and manage,” added Meng.

 

The solution is the first foundational service among the many software-as-a-service offerings from Ruckus Networks.

 

Ruckus Cloud Wi-Fi is now available in the Asia Pacific region.

5 Main Reasons To Embrace Cloud Computing In Your Enterprise https://mec.ph/cloud-computing/reasons-to-embrace-cloud/ https://mec.ph/cloud-computing/reasons-to-embrace-cloud/#comments Wed, 24 Apr 2019 01:57:31 +0000 https://mec.ph/?p=35428 Digital Transformation trends are empowering organizations of all sizes to improve and innovate their businesses. Some of these trends are completely changing the game including Cloud Computing.   Cloud Computing is the practice of storing, managing, and processing data using a network of remote servers that are hosted on the Internet. Cloud computing allows organizations… Continue reading 5 Main Reasons To Embrace Cloud Computing In Your Enterprise


Digital Transformation trends are empowering organizations of all sizes to improve and innovate their businesses. Some of these trends are completely changing the game including Cloud Computing.

 

Cloud Computing is the practice of storing, managing, and processing data using a network of remote servers that are hosted on the Internet. Cloud computing allows organizations to access their information and data virtually anytime, anywhere.

 

A recent survey by LogicMonitor suggests that 83% of enterprise workloads will be in the cloud by 2020. On-premises workloads are predicted to shrink from 37% of all workloads today to 27% by 2020.

Where Workloads Will Run (Today versus 2020)

Why are enterprises leaning toward the cloud? In this article, we will help you gain a better understanding of the cloud and its potential benefits to your organization.

1.) Low Cost

Being virtually available, enterprises (especially SMEs and start-ups) can save substantial capital and operational costs through a reduction in spending on equipment, infrastructure, maintenance, and software.

2.) Accessibility

The cloud allows access to information and data through wireless devices, whenever and wherever, as long as you’re connected to the Internet, thus increasing the organization’s agility.

3.) Scalability

Organizations can easily scale operations and storage up or down to suit variations in business demand. Most cloud service providers can also increase or decrease packages within minutes.

4.) Improved Collaboration

A cloud computing model can give your organization the ability to communicate and share with ease. Employees can easily share data and collaborate to complete tasks even from different locations. It also allows multiple users to share and work on data at the same time.

5.) Business Continuity

In the event of a disaster, be it natural or technical, organizations must be confident that their data are protected. Thanks to the cloud’s backup mechanism, the data will always be available and it can be restored in a timely manner as long as users have an Internet connection.

 
The cloud presents limitless benefits and opportunities for enterprises. However, migrating to the cloud requires a well-planned strategy and a future-proof, scalable, and secure network infrastructure. At MEC, we offer products and solutions that support and protect organizations of any size as they transition to the cloud.
